Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of AutoConfig, AutoModel, and AutoTokenizer will directly create a class of the relevant architecture. For instance
model = AutoModel.from_pretrained('bert-base-cased')
will create a model that is an instance of BertModel.
There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax).
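For instance, the three auto classes can be combined to load the configuration, tokenizer, and model of a checkpoint in one go (a minimal sketch; the printed class names assume the bert-base-cased checkpoint):

from transformers import AutoConfig, AutoModel, AutoTokenizer

# Each auto class inspects the checkpoint and picks the matching
# architecture-specific class (BertConfig, BertTokenizerFast, BertModel here).
config = AutoConfig.from_pretrained('bert-base-cased')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModel.from_pretrained('bert-base-cased')

print(type(config).__name__)     # BertConfig
print(type(tokenizer).__name__)  # BertTokenizerFast
print(type(model).__name__)      # BertModel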
Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom model class NewModel, make sure you have a NewModelConfig, then you can add those to the auto classes like this:
from transformers import AutoConfig, AutoModel

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes as you usually would!
Warning
If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig).
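Putting the warning into practice, here is a minimal end-to-end sketch; NewModelConfig, NewModel, and their hidden_size parameter are illustrative placeholders, not library classes:

import torch
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class NewModelConfig(PretrainedConfig):
    # Must match the key passed to AutoConfig.register below.
    model_type = "new-model"

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

class NewModel(PreTrainedModel):
    # Must match the config class passed to AutoModel.register below.
    config_class = NewModelConfig

    def __init__(self, config):
        super().__init__(config)
        self.linear = torch.nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.linear(hidden_states)

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)

# The auto classes now resolve the custom architecture.
config = AutoConfig.for_model("new-model")
model = AutoModel.from_config(config)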
AutoConfig

class transformers.AutoConfig
This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_pretrained(pretrained_model_name_or_path, **kwargs)
Instantiate one of the configuration classes of the library from a pretrained model configuration.
The configuration class to instantiate is selected based on the model_type property of the config object that is loaded, or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertConfig (ALBERT model)
- bart → BartConfig (BART model)
- beit → BeitConfig (BEiT model)
- bert → BertConfig (BERT model)
- bert-generation → BertGenerationConfig (Bert Generation model)
- big_bird → BigBirdConfig (BigBird model)
- bigbird_pegasus → BigBirdPegasusConfig (BigBirdPegasus model)
- blenderbot → BlenderbotConfig (Blenderbot model)
- blenderbot-small → BlenderbotSmallConfig (BlenderbotSmall model)
- camembert → CamembertConfig (CamemBERT model)
- canine → CanineConfig (Canine model)
- clip → CLIPConfig (CLIP model)
- convbert → ConvBertConfig (ConvBERT model)
- ctrl → CTRLConfig (CTRL model)
- deberta → DebertaConfig (DeBERTa model)
- deberta-v2 → DebertaV2Config (DeBERTa-v2 model)
- deit → DeiTConfig (DeiT model)
- detr → DetrConfig (DETR model)
- distilbert → DistilBertConfig (DistilBERT model)
- dpr → DPRConfig (DPR model)
- electra → ElectraConfig (ELECTRA model)
- encoder-decoder → EncoderDecoderConfig (Encoder decoder model)
- flaubert → FlaubertConfig (FlauBERT model)
- fnet → FNetConfig (FNet model)
- fsmt → FSMTConfig (FairSeq Machine-Translation model)
- funnel → FunnelConfig (Funnel Transformer model)
- gpt2 → GPT2Config (OpenAI GPT-2 model)
- gpt_neo → GPTNeoConfig (GPT Neo model)
- gptj → GPTJConfig (GPT-J model)
- hubert → HubertConfig (Hubert model)
- ibert → IBertConfig (I-BERT model)
- layoutlm → LayoutLMConfig (LayoutLM model)
- layoutlmv2 → LayoutLMv2Config (LayoutLMv2 model)
- led → LEDConfig (LED model)
- longformer → LongformerConfig (Longformer model)
- luke → LukeConfig (LUKE model)
- lxmert → LxmertConfig (LXMERT model)
- m2m_100 → M2M100Config (M2M100 model)
- marian → MarianConfig (Marian model)
- mbart → MBartConfig (mBART model)
- megatron-bert → MegatronBertConfig (MegatronBert model)
- mobilebert → MobileBertConfig (MobileBERT model)
- mpnet → MPNetConfig (MPNet model)
- mt5 → MT5Config (mT5 model)
- openai-gpt → OpenAIGPTConfig (OpenAI GPT model)
- pegasus → PegasusConfig (Pegasus model)
- prophetnet → ProphetNetConfig (ProphetNet model)
- rag → RagConfig (RAG model)
- reformer → ReformerConfig (Reformer model)
- rembert → RemBertConfig (RemBERT model)
- retribert → RetriBertConfig (RetriBERT model)
- roberta → RobertaConfig (RoBERTa model)
- roformer → RoFormerConfig (RoFormer model)
- segformer → SegformerConfig (SegFormer model)
- sew → SEWConfig (SEW model)
- sew-d → SEWDConfig (SEW-D model)
- speech-encoder-decoder → SpeechEncoderDecoderConfig (Speech Encoder decoder model)
- speech_to_text → Speech2TextConfig (Speech2Text model)
- speech_to_text_2 → Speech2Text2Config (Speech2Text2 model)
- splinter → SplinterConfig (Splinter model)
- squeezebert → SqueezeBertConfig (SqueezeBERT model)
- t5 → T5Config (T5 model)
- tapas → TapasConfig (TAPAS model)
- transfo-xl → TransfoXLConfig (Transformer-XL model)
- trocr → TrOCRConfig (TrOCR model)
- unispeech → UniSpeechConfig (UniSpeech model)
- unispeech-sat → UniSpeechSatConfig (UniSpeechSat model)
- vision-encoder-decoder → VisionEncoderDecoderConfig (Vision Encoder decoder model)
- visual_bert → VisualBertConfig (VisualBert model)
- vit → ViTConfig (ViT model)
- wav2vec2 → Wav2Vec2Config (Wav2Vec2 model)
- xlm → XLMConfig (XLM model)
- xlm-prophetnet → XLMProphetNetConfig (XLMProphetNet model)
- xlm-roberta → XLMRobertaConfig (XLM-RoBERTa model)
- xlnet → XLNetConfig (XLNet model)
Parameters
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - A path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False) – If False, this function returns just the final configuration object. If True, this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.
- kwargs (additional keyword arguments, optional) – The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.
Examples:
>>> from transformers import AutoConfig

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-uncased')

>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained('dbmdz/bert-base-german-cased')

>>> # If configuration file is in a directory (e.g., was saved using `save_pretrained('./test/saved_model/')`).
>>> config = AutoConfig.from_pretrained('./test/bert_saved_model/')

>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained('./test/bert_saved_model/my_configuration.json')

>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True, foo=False)
>>> config.output_attentions
True
>>> config, unused_kwargs = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True, foo=False, return_unused_kwargs=True)
>>> config.output_attentions
True
>>> unused_kwargs
{'foo': False}
static register(model_type, config)
Register a new configuration for this class.

Parameters
- model_type (str) – The model type like "bert" or "gpt".
- config (PretrainedConfig) – The config to register.
AutoTokenizer

class transformers.AutoTokenizer
This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertTokenizer or AlbertTokenizerFast (ALBERT model)
- bart → BartTokenizer or BartTokenizerFast (BART model)
- barthez → BarthezTokenizer or BarthezTokenizerFast (BARThez model)
- bartpho → BartphoTokenizer (BARTpho model)
- bert → BertTokenizer or BertTokenizerFast (BERT model)
- bert-generation → BertGenerationTokenizer (Bert Generation model)
- bert-japanese → BertJapaneseTokenizer (BertJapanese model)
- bertweet → BertweetTokenizer (Bertweet model)
- big_bird → BigBirdTokenizer or BigBirdTokenizerFast (BigBird model)
- bigbird_pegasus → PegasusTokenizer or PegasusTokenizerFast (BigBirdPegasus model)
- blenderbot → BlenderbotTokenizer (Blenderbot model)
- blenderbot-small → BlenderbotSmallTokenizer (BlenderbotSmall model)
- byt5 → ByT5Tokenizer (ByT5 model)
- camembert → CamembertTokenizer or CamembertTokenizerFast (CamemBERT model)
- canine → CanineTokenizer (Canine model)
- clip → CLIPTokenizer or CLIPTokenizerFast (CLIP model)
- convbert → ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model)
- cpm → CpmTokenizer or CpmTokenizerFast (CPM model)
- ctrl → CTRLTokenizer (CTRL model)
- deberta → DebertaTokenizer or DebertaTokenizerFast (DeBERTa model)
- deberta-v2 → DebertaV2Tokenizer (DeBERTa-v2 model)
- distilbert → DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model)
- dpr → DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model)
- electra → ElectraTokenizer or ElectraTokenizerFast (ELECTRA model)
- flaubert → FlaubertTokenizer (FlauBERT model)
- fnet → FNetTokenizer or FNetTokenizerFast (FNet model)
- fsmt → FSMTTokenizer (FairSeq Machine-Translation model)
- funnel → FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model)
- gpt2 → GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model)
- gpt_neo → GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model)
- hubert → Wav2Vec2CTCTokenizer (Hubert model)
- ibert → RobertaTokenizer or RobertaTokenizerFast (I-BERT model)
- layoutlm → LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model)
- layoutlmv2 → LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model)
- led → LEDTokenizer or LEDTokenizerFast (LED model)
- longformer → LongformerTokenizer or LongformerTokenizerFast (Longformer model)
- luke → LukeTokenizer (LUKE model)
- lxmert → LxmertTokenizer or LxmertTokenizerFast (LXMERT model)
- m2m_100 → M2M100Tokenizer (M2M100 model)
- marian → MarianTokenizer (Marian model)
- mbart → MBartTokenizer or MBartTokenizerFast (mBART model)
- mbart50 → MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model)
- mobilebert → MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model)
- mpnet → MPNetTokenizer or MPNetTokenizerFast (MPNet model)
- mt5 → MT5Tokenizer or MT5TokenizerFast (mT5 model)
- openai-gpt → OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model)
- pegasus → PegasusTokenizer or PegasusTokenizerFast (Pegasus model)
- phobert → PhobertTokenizer (PhoBERT model)
- prophetnet → ProphetNetTokenizer (ProphetNet model)
- rag → RagTokenizer (RAG model)
- reformer → ReformerTokenizer or ReformerTokenizerFast (Reformer model)
- rembert → RemBertTokenizer or RemBertTokenizerFast (RemBERT model)
- retribert → RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model)
- roberta → RobertaTokenizer or RobertaTokenizerFast (RoBERTa model)
- roformer → RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model)
- speech_to_text → Speech2TextTokenizer (Speech2Text model)
- speech_to_text_2 → Speech2Text2Tokenizer (Speech2Text2 model)
- splinter → SplinterTokenizer or SplinterTokenizerFast (Splinter model)
- squeezebert → SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model)
- t5 → T5Tokenizer or T5TokenizerFast (T5 model)
- tapas → TapasTokenizer (TAPAS model)
- transfo-xl → TransfoXLTokenizer (Transformer-XL model)
- wav2vec2 → Wav2Vec2CTCTokenizer (Wav2Vec2 model)
- xlm → XLMTokenizer (XLM model)
- xlm-prophetnet → XLMProphetNetTokenizer (XLMProphetNet model)
- xlm-roberta → XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model)
- xlnet → XLNetTokenizer or XLNetTokenizerFast (XLNet model)
Params:
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - A path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g., ./my_model_directory/vocab.txt. (Not applicable to all derived classes.)
- inputs (additional positional arguments, optional) – Will be passed along to the Tokenizer __init__() method.
- config (PretrainedConfig, optional) – The configuration object used to determine the tokenizer class to instantiate.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- subfolder (str, optional) – In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
- use_fast (bool, optional, defaults to True) – Whether or not to try to load the fast version of the tokenizer.
- tokenizer_type (str, optional) – Tokenizer type to be loaded.
- kwargs (additional keyword arguments, optional) – Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
Examples:
>>> from transformers import AutoTokenizer

>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained('dbmdz/bert-base-german-cased')

>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)
>>> tokenizer = AutoTokenizer.from_pretrained('./test/bert_saved_model/')
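As a complement to the use_fast parameter described above, this short sketch shows which class is actually returned for a checkpoint that ships both tokenizer variants (assuming bert-base-uncased):

from transformers import AutoTokenizer

# By default AutoTokenizer tries the Rust-backed "fast" tokenizer;
# use_fast=False falls back to the pure-Python implementation.
fast_tok = AutoTokenizer.from_pretrained('bert-base-uncased')
slow_tok = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=False)

print(type(fast_tok).__name__)  # BertTokenizerFast
print(type(slow_tok).__name__)  # BertTokenizer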
register(config_class, slow_tokenizer_class=None, fast_tokenizer_class=None)
Register a new tokenizer in this mapping.

Parameters
- config_class (PretrainedConfig) – The configuration corresponding to the model to register.
- slow_tokenizer_class (PreTrainedTokenizer, optional) – The slow tokenizer to register.
- fast_tokenizer_class (PreTrainedTokenizerFast, optional) – The fast tokenizer to register.
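A hedged sketch of registering a custom slow tokenizer; NewModelConfig and NewTokenizer are illustrative placeholders, not library classes:

from transformers import AutoTokenizer, PreTrainedTokenizer, PretrainedConfig

class NewModelConfig(PretrainedConfig):
    model_type = "new-model"

class NewTokenizer(PreTrainedTokenizer):
    # A real tokenizer would implement the vocabulary handling and
    # tokenization methods; they are omitted in this sketch.
    pass

AutoTokenizer.register(NewModelConfig, slow_tokenizer_class=NewTokenizer)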
AutoFeatureExtractor

class transformers.AutoFeatureExtractor
This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_pretrained(pretrained_model_name_or_path, **kwargs)
Instantiate one of the feature extractor classes of the library from a pretrained model.
The feature extractor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- beit → BeitFeatureExtractor (BEiT model)
- clip → CLIPFeatureExtractor (CLIP model)
- deit → DeiTFeatureExtractor (DeiT model)
- detr → DetrFeatureExtractor (DETR model)
- hubert → Wav2Vec2FeatureExtractor (Hubert model)
- layoutlmv2 → LayoutLMv2FeatureExtractor (LayoutLMv2 model)
- speech_to_text → Speech2TextFeatureExtractor (Speech2Text model)
- vit → ViTFeatureExtractor (ViT model)
- wav2vec2 → Wav2Vec2FeatureExtractor (Wav2Vec2 model)
Params:
- pretrained_model_name_or_path (str or os.PathLike) – This can be either:
  - a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the feature extractor files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False) – If False, this function returns just the final feature extractor object. If True, this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
- kwargs (Dict[str, Any], optional) – The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Note
Passing use_auth_token=True is required when you want to use a private model.

Examples:
>>> from transformers import AutoFeatureExtractor

>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/wav2vec2-base-960h')

>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using `save_pretrained('./test/saved_model/')`)
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('./test/saved_model/')
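Beyond loading, a feature extractor prepares raw signals for the model; a minimal sketch with dummy audio (the 16 kHz sampling rate matches the facebook/wav2vec2-base-960h checkpoint):

import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/wav2vec2-base-960h')

# One second of dummy audio at 16 kHz; a real application would load a waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors='pt')
print(inputs.input_values.shape)  # torch.Size([1, 16000])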
AutoModel

class transformers.AutoModel(*args, **kwargs)
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_config(**kwargs)
Instantiates one of the base model classes of the library from a configuration.

Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

Parameters
- config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: AlbertModel (ALBERT model)
  - BartConfig configuration class: BartModel (BART model)
  - BeitConfig configuration class: BeitModel (BEiT model)
  - BertConfig configuration class: BertModel (BERT model)
  - BertGenerationConfig configuration class: BertGenerationEncoder (Bert Generation model)
  - BigBirdConfig configuration class: BigBirdModel (BigBird model)
  - BigBirdPegasusConfig configuration class: BigBirdPegasusModel (BigBirdPegasus model)
  - BlenderbotConfig configuration class: BlenderbotModel (Blenderbot model)
  - BlenderbotSmallConfig configuration class: BlenderbotSmallModel (BlenderbotSmall model)
  - CLIPConfig configuration class: CLIPModel (CLIP model)
  - CTRLConfig configuration class: CTRLModel (CTRL model)
  - CamembertConfig configuration class: CamembertModel (CamemBERT model)
  - CanineConfig configuration class: CanineModel (Canine model)
  - ConvBertConfig configuration class: ConvBertModel (ConvBERT model)
  - DPRConfig configuration class: DPRQuestionEncoder (DPR model)
  - DebertaConfig configuration class: DebertaModel (DeBERTa model)
  - DebertaV2Config configuration class: DebertaV2Model (DeBERTa-v2 model)
  - DeiTConfig configuration class: DeiTModel (DeiT model)
  - DetrConfig configuration class: DetrModel (DETR model)
  - DistilBertConfig configuration class: DistilBertModel (DistilBERT model)
  - ElectraConfig configuration class: ElectraModel (ELECTRA model)
  - FNetConfig configuration class: FNetModel (FNet model)
  - FSMTConfig configuration class: FSMTModel (FairSeq Machine-Translation model)
  - FlaubertConfig configuration class: FlaubertModel (FlauBERT model)
  - FunnelConfig configuration class: FunnelModel or FunnelBaseModel (Funnel Transformer model)
  - GPT2Config configuration class: GPT2Model (OpenAI GPT-2 model)
  - GPTJConfig configuration class: GPTJModel (GPT-J model)
  - GPTNeoConfig configuration class: GPTNeoModel (GPT Neo model)
  - HubertConfig configuration class: HubertModel (Hubert model)
  - IBertConfig configuration class: IBertModel (I-BERT model)
  - LayoutLMConfig configuration class: LayoutLMModel (LayoutLM model)
  - LayoutLMv2Config configuration class: LayoutLMv2Model (LayoutLMv2 model)
  - LongformerConfig configuration class: LongformerModel (Longformer model)
  - LukeConfig configuration class: LukeModel (LUKE model)
  - LxmertConfig configuration class: LxmertModel (LXMERT model)
  - M2M100Config configuration class: M2M100Model (M2M100 model)
  - MBartConfig configuration class: MBartModel (mBART model)
  - MPNetConfig configuration class: MPNetModel (MPNet model)
  - MarianConfig configuration class: MarianModel (Marian model)
  - MegatronBertConfig configuration class: MegatronBertModel (MegatronBert model)
  - MobileBertConfig configuration class: MobileBertModel (MobileBERT model)
  - OpenAIGPTConfig configuration class: OpenAIGPTModel (OpenAI GPT model)
  - PegasusConfig configuration class: PegasusModel (Pegasus model)
  - ProphetNetConfig configuration class: ProphetNetModel (ProphetNet model)
  - ReformerConfig configuration class: ReformerModel (Reformer model)
  - RemBertConfig configuration class: RemBertModel (RemBERT model)
  - RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
  - RoFormerConfig configuration class: RoFormerModel (RoFormer model)
  - RobertaConfig configuration class: RobertaModel (RoBERTa model)
  - SEWDConfig configuration class: SEWDModel (SEW-D model)
  - SegformerConfig configuration class: SegformerModel (SegFormer model)
  - Speech2TextConfig configuration class: Speech2TextModel (Speech2Text model)
  - SplinterConfig configuration class: SplinterModel (Splinter model)
  - SqueezeBertConfig configuration class: SqueezeBertModel (SqueezeBERT model)
  - TapasConfig configuration class: TapasModel (TAPAS model)
  - TransfoXLConfig configuration class: TransfoXLModel (Transformer-XL model)
  - UniSpeechConfig configuration class: UniSpeechModel (UniSpeech model)
  - UniSpeechSatConfig configuration class: UniSpeechSatModel (UniSpeechSat model)
  - VisualBertConfig configuration class: VisualBertModel (VisualBert model)
  - Wav2Vec2Config configuration class: Wav2Vec2Model (Wav2Vec2 model)
  - XLMProphetNetConfig configuration class: XLMProphetNetModel (XLMProphetNet model)
  - XLMRobertaConfig configuration class: XLMRobertaModel (XLM-RoBERTa model)
  - XLNetConfig configuration class: XLNetModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModel.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertModel (ALBERT model)
- bart → BartModel (BART model)
- beit → BeitModel (BEiT model)
- bert → BertModel (BERT model)
- bert-generation → BertGenerationEncoder (Bert Generation model)
- big_bird → BigBirdModel (BigBird model)
- bigbird_pegasus → BigBirdPegasusModel (BigBirdPegasus model)
- blenderbot → BlenderbotModel (Blenderbot model)
- blenderbot-small → BlenderbotSmallModel (BlenderbotSmall model)
- camembert → CamembertModel (CamemBERT model)
- canine → CanineModel (Canine model)
- clip → CLIPModel (CLIP model)
- convbert → ConvBertModel (ConvBERT model)
- ctrl → CTRLModel (CTRL model)
- deberta → DebertaModel (DeBERTa model)
- deberta-v2 → DebertaV2Model (DeBERTa-v2 model)
- deit → DeiTModel (DeiT model)
- detr → DetrModel (DETR model)
- distilbert → DistilBertModel (DistilBERT model)
- dpr → DPRQuestionEncoder (DPR model)
- electra → ElectraModel (ELECTRA model)
- flaubert → FlaubertModel (FlauBERT model)
- fnet → FNetModel (FNet model)
- fsmt → FSMTModel (FairSeq Machine-Translation model)
- funnel → FunnelModel or FunnelBaseModel (Funnel Transformer model)
- gpt2 → GPT2Model (OpenAI GPT-2 model)
- gpt_neo → GPTNeoModel (GPT Neo model)
- gptj → GPTJModel (GPT-J model)
- hubert → HubertModel (Hubert model)
- ibert → IBertModel (I-BERT model)
- layoutlm → LayoutLMModel (LayoutLM model)
- layoutlmv2 → LayoutLMv2Model (LayoutLMv2 model)
- led → LEDModel (LED model)
- longformer → LongformerModel (Longformer model)
- luke → LukeModel (LUKE model)
- lxmert → LxmertModel (LXMERT model)
- m2m_100 → M2M100Model (M2M100 model)
- marian → MarianModel (Marian model)
- mbart → MBartModel (mBART model)
- megatron-bert → MegatronBertModel (MegatronBert model)
- mobilebert → MobileBertModel (MobileBERT model)
- mpnet → MPNetModel (MPNet model)
- mt5 → MT5Model (mT5 model)
- openai-gpt → OpenAIGPTModel (OpenAI GPT model)
- pegasus → PegasusModel (Pegasus model)
- prophetnet → ProphetNetModel (ProphetNet model)
- reformer → ReformerModel (Reformer model)
- rembert → RemBertModel (RemBERT model)
- retribert → RetriBertModel (RetriBERT model)
- roberta → RobertaModel (RoBERTa model)
- roformer → RoFormerModel (RoFormer model)
- segformer → SegformerModel (SegFormer model)
- sew → SEWModel (SEW model)
- sew-d → SEWDModel (SEW-D model)
- speech_to_text → Speech2TextModel (Speech2Text model)
- splinter → SplinterModel (Splinter model)
- squeezebert → SqueezeBertModel (SqueezeBERT model)
- t5 → T5Model (T5 model)
- tapas → TapasModel (TAPAS model)
- transfo-xl → TransfoXLModel (Transformer-XL model)
- unispeech → UniSpeechModel (UniSpeech model)
- unispeech-sat → UniSpeechSatModel (UniSpeechSat model)
- visual_bert → VisualBertModel (VisualBert model)
- vit → ViTModel (ViT model)
- wav2vec2 → Wav2Vec2Model (Wav2Vec2 model)
- xlm → XLMModel (XLM model)
- xlm-prophetnet → XLMProphetNetModel (XLMProphetNet model)
- xlm-roberta → XLMRobertaModel (XLM-RoBERTa model)
- xlnet → XLNetModel (XLNet model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) – A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, AutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained('bert-base-cased')

>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModel.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
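To go from raw text to hidden states, the model is typically paired with the matching auto tokenizer; a minimal sketch (the output shape assumes bert-base-cased with its hidden size of 768):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModel.from_pretrained('bert-base-cased')

inputs = tokenizer("Hello world!", return_tensors='pt')
with torch.no_grad():  # the model is already in evaluation mode
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 5, 768])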
AutoModelForPreTraining

class transformers.AutoModelForPreTraining(*args, **kwargs)
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_config(**kwargs)
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

Parameters
- config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: AlbertForPreTraining (ALBERT model)
  - BartConfig configuration class: BartForConditionalGeneration (BART model)
  - BertConfig configuration class: BertForPreTraining (BERT model)
  - BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model)
  - CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
  - CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
  - DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
  - DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
  - DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
  - ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model)
  - FNetConfig configuration class: FNetForPreTraining (FNet model)
  - FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
  - FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
  - FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model)
  - GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
  - IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
  - LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
  - LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
  - LxmertConfig configuration class: LxmertForPreTraining (LXMERT model)
  - MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
  - MegatronBertConfig configuration class: MegatronBertForPreTraining (MegatronBert model)
  - MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model)
  - OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
  - RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
  - RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
  - SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
  - T5Config configuration class: T5ForConditionalGeneration (T5 model)
  - TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
  - TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
  - UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model)
  - UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model)
  - VisualBertConfig configuration class: VisualBertForPreTraining (VisualBert model)
  - Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model)
  - XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
  - XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
  - XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForPreTraining.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertForPreTraining (ALBERT model)
- bart → BartForConditionalGeneration (BART model)
- bert → BertForPreTraining (BERT model)
- big_bird → BigBirdForPreTraining (BigBird model)
- camembert → CamembertForMaskedLM (CamemBERT model)
- ctrl → CTRLLMHeadModel (CTRL model)
- deberta → DebertaForMaskedLM (DeBERTa model)
- deberta-v2 → DebertaV2ForMaskedLM (DeBERTa-v2 model)
- distilbert → DistilBertForMaskedLM (DistilBERT model)
- electra → ElectraForPreTraining (ELECTRA model)
- flaubert → FlaubertWithLMHeadModel (FlauBERT model)
- fnet → FNetForPreTraining (FNet model)
- fsmt → FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- funnel → FunnelForPreTraining (Funnel Transformer model)
- gpt2 → GPT2LMHeadModel (OpenAI GPT-2 model)
- ibert → IBertForMaskedLM (I-BERT model)
- layoutlm → LayoutLMForMaskedLM (LayoutLM model)
- longformer → LongformerForMaskedLM (Longformer model)
- lxmert → LxmertForPreTraining (LXMERT model)
- megatron-bert → MegatronBertForPreTraining (MegatronBert model)
- mobilebert → MobileBertForPreTraining (MobileBERT model)
- mpnet → MPNetForMaskedLM (MPNet model)
- openai-gpt → OpenAIGPTLMHeadModel (OpenAI GPT model)
- retribert → RetriBertModel (RetriBERT model)
- roberta → RobertaForMaskedLM (RoBERTa model)
- squeezebert → SqueezeBertForMaskedLM (SqueezeBERT model)
- t5 → T5ForConditionalGeneration (T5 model)
- tapas → TapasForMaskedLM (TAPAS model)
- transfo-xl → TransfoXLLMHeadModel (Transformer-XL model)
- unispeech → UniSpeechForPreTraining (UniSpeech model)
- unispeech-sat → UniSpeechSatForPreTraining (UniSpeechSat model)
- visual_bert → VisualBertForPreTraining (VisualBert model)
- wav2vec2 → Wav2Vec2ForPreTraining (Wav2Vec2 model)
- xlm → XLMWithLMHeadModel (XLM model)
- xlm-roberta → XLMRobertaForMaskedLM (XLM-RoBERTa model)
- xlnet → XLNetLMHeadModel (XLNet model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) – A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained('bert-base-cased')

>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForPreTraining.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
AutoModelForCausalLM

class transformers.AutoModelForCausalLM(*args, **kwargs)
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_config(**kwargs)
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

Parameters
- config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
  - BartConfig configuration class: BartForCausalLM (BART model)
  - BertConfig configuration class: BertLMHeadModel (BERT model)
  - BertGenerationConfig configuration class: BertGenerationDecoder (Bert Generation model)
  - BigBirdConfig configuration class: BigBirdForCausalLM (BigBird model)
  - BigBirdPegasusConfig configuration class: BigBirdPegasusForCausalLM (BigBirdPegasus model)
  - BlenderbotConfig configuration class: BlenderbotForCausalLM (Blenderbot model)
  - BlenderbotSmallConfig configuration class: BlenderbotSmallForCausalLM (BlenderbotSmall model)
  - CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
  - CamembertConfig configuration class: CamembertForCausalLM (CamemBERT model)
  - GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
  - GPTJConfig configuration class: GPTJForCausalLM (GPT-J model)
  - GPTNeoConfig configuration class: GPTNeoForCausalLM (GPT Neo model)
  - MBartConfig configuration class: MBartForCausalLM (mBART model)
  - MarianConfig configuration class: MarianForCausalLM (Marian model)
  - MegatronBertConfig configuration class: MegatronBertForCausalLM (MegatronBert model)
  - OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
  - PegasusConfig configuration class: PegasusForCausalLM (Pegasus model)
  - ProphetNetConfig configuration class: ProphetNetForCausalLM (ProphetNet model)
  - ReformerConfig configuration class: ReformerModelWithLMHead (Reformer model)
  - RemBertConfig configuration class: RemBertForCausalLM (RemBERT model)
  - RoFormerConfig configuration class: RoFormerForCausalLM (RoFormer model)
  - RobertaConfig configuration class: RobertaForCausalLM (RoBERTa model)
  - Speech2Text2Config configuration class: Speech2Text2ForCausalLM (Speech2Text2 model)
  - TrOCRConfig configuration class: TrOCRForCausalLM (TrOCR model)
  - TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
  - XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
  - XLMProphetNetConfig configuration class: XLMProphetNetForCausalLM (XLMProphetNet model)
  - XLMRobertaConfig configuration class: XLMRobertaForCausalLM (XLM-RoBERTa model)
  - XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForCausalLM.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)
Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- bart → BartForCausalLM (BART model)
- bert → BertLMHeadModel (BERT model)
- bert-generation → BertGenerationDecoder (Bert Generation model)
- big_bird → BigBirdForCausalLM (BigBird model)
- bigbird_pegasus → BigBirdPegasusForCausalLM (BigBirdPegasus model)
- blenderbot → BlenderbotForCausalLM (Blenderbot model)
- blenderbot-small → BlenderbotSmallForCausalLM (BlenderbotSmall model)
- camembert → CamembertForCausalLM (CamemBERT model)
- ctrl → CTRLLMHeadModel (CTRL model)
- gpt2 → GPT2LMHeadModel (OpenAI GPT-2 model)
- gpt_neo → GPTNeoForCausalLM (GPT Neo model)
- gptj → GPTJForCausalLM (GPT-J model)
- marian → MarianForCausalLM (Marian model)
- mbart → MBartForCausalLM (mBART model)
- megatron-bert → MegatronBertForCausalLM (MegatronBert model)
- openai-gpt → OpenAIGPTLMHeadModel (OpenAI GPT model)
- pegasus → PegasusForCausalLM (Pegasus model)
- prophetnet → ProphetNetForCausalLM (ProphetNet model)
- reformer → ReformerModelWithLMHead (Reformer model)
- rembert → RemBertForCausalLM (RemBERT model)
- roberta → RobertaForCausalLM (RoBERTa model)
- roformer → RoFormerForCausalLM (RoFormer model)
- speech_to_text_2 → Speech2Text2ForCausalLM (Speech2Text2 model)
- transfo-xl → TransfoXLLMHeadModel (Transformer-XL model)
- trocr → TrOCRForCausalLM (TrOCR model)
- xlm → XLMWithLMHeadModel (XLM model)
- xlm-prophetnet → XLMProphetNetForCausalLM (XLMProphetNet model)
- xlm-roberta → XLMRobertaForCausalLM (XLM-RoBERTa model)
- xlnet → XLNetLMHeadModel (XLNet model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) – A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained('bert-base-cased')

>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForCausalLM.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
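Since these classes carry a causal language modeling head, they can be used for text generation directly; a minimal sketch assuming the gpt2 checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForCausalLM.from_pretrained('gpt2')

inputs = tokenizer("The Auto classes make it easy to", return_tensors='pt')
# Greedy decoding up to 30 tokens total (prompt included).
output_ids = model.generate(**inputs, max_length=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))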
AutoModelForMaskedLM

class transformers.AutoModelForMaskedLM(*args, **kwargs)
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).

classmethod from_config(**kwargs)
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

Parameters
- config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig
configuration class:AlbertForMaskedLM
(ALBERT model)BartConfig
configuration class:BartForConditionalGeneration
(BART model)BertConfig
configuration class:BertForMaskedLM
(BERT model)BigBirdConfig
configuration class:BigBirdForMaskedLM
(BigBird model)CamembertConfig
configuration class:CamembertForMaskedLM
(CamemBERT model)ConvBertConfig
configuration class:ConvBertForMaskedLM
(ConvBERT model)DebertaConfig
configuration class:DebertaForMaskedLM
(DeBERTa model)DebertaV2Config
configuration class:DebertaV2ForMaskedLM
(DeBERTa-v2 model)DistilBertConfig
configuration class:DistilBertForMaskedLM
(DistilBERT model)ElectraConfig
configuration class:ElectraForMaskedLM
(ELECTRA model)FNetConfig
configuration class:FNetForMaskedLM
(FNet model)FlaubertConfig
configuration class:FlaubertWithLMHeadModel
(FlauBERT model)FunnelConfig
configuration class:FunnelForMaskedLM
(Funnel Transformer model)IBertConfig
configuration class:IBertForMaskedLM
(I-BERT model)LayoutLMConfig
configuration class:LayoutLMForMaskedLM
(LayoutLM model)LongformerConfig
configuration class:LongformerForMaskedLM
(Longformer model)MBartConfig
configuration class:MBartForConditionalGeneration
(mBART model)MPNetConfig
configuration class:MPNetForMaskedLM
(MPNet model)MegatronBertConfig
configuration class:MegatronBertForMaskedLM
(MegatronBert model)MobileBertConfig
configuration class:MobileBertForMaskedLM
(MobileBERT model)ReformerConfig
configuration class:ReformerForMaskedLM
(Reformer model)RemBertConfig
configuration class:RemBertForMaskedLM
(RemBERT model)RoFormerConfig
configuration class:RoFormerForMaskedLM
(RoFormer model)RobertaConfig
configuration class:RobertaForMaskedLM
(RoBERTa model)SqueezeBertConfig
configuration class:SqueezeBertForMaskedLM
(SqueezeBERT model)TapasConfig
configuration class:TapasForMaskedLM
(TAPAS model)Wav2Vec2Config
configuration class:Wav2Vec2ForMaskedLM
(Wav2Vec2 model)XLMConfig
configuration class:XLMWithLMHeadModel
(XLM model)XLMRobertaConfig
configuration class:XLMRobertaForMaskedLM
(XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForMaskedLM.from_config(config)
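As the note above says, from_config() does not load weights; a minimal sketch making that explicit, reusing the config from the example:

>>> # The model has parameters, but they are randomly initialized: load weights
>>> # with from_pretrained() or train the model before using it for inference.
>>> model = AutoModelForMaskedLM.from_config(config)
>>> model.num_parameters() > 0
True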
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:albert β
AlbertForMaskedLM
(ALBERT model)bart β
BartForConditionalGeneration
(BART model)bert β
BertForMaskedLM
(BERT model)big_bird β
BigBirdForMaskedLM
(BigBird model)camembert β
CamembertForMaskedLM
(CamemBERT model)convbert β
ConvBertForMaskedLM
(ConvBERT model)deberta β
DebertaForMaskedLM
(DeBERTa model)deberta-v2 β
DebertaV2ForMaskedLM
(DeBERTa-v2 model)distilbert β
DistilBertForMaskedLM
(DistilBERT model)electra β
ElectraForMaskedLM
(ELECTRA model)flaubert β
FlaubertWithLMHeadModel
(FlauBERT model)fnet β
FNetForMaskedLM
(FNet model)funnel β
FunnelForMaskedLM
(Funnel Transformer model)ibert β
IBertForMaskedLM
(I-BERT model)layoutlm β
LayoutLMForMaskedLM
(LayoutLM model)longformer β
LongformerForMaskedLM
(Longformer model)mbart β
MBartForConditionalGeneration
(mBART model)megatron-bert β
MegatronBertForMaskedLM
(MegatronBert model)mobilebert β
MobileBertForMaskedLM
(MobileBERT model)mpnet β
MPNetForMaskedLM
(MPNet model)reformer β
ReformerForMaskedLM
(Reformer model)rembert β
RemBertForMaskedLM
(RemBERT model)roberta β
RobertaForMaskedLM
(RoBERTa model)roformer β
RoFormerForMaskedLM
(RoFormer model)squeezebert β
SqueezeBertForMaskedLM
(SqueezeBERT model)tapas β
TapasForMaskedLM
(TAPAS model)wav2vec2 β
Wav2Vec2ForMaskedLM
(Wav2Vec2 model)xlm β
XLMWithLMHeadModel
(XLM model)xlm-roberta β
XLMRobertaForMaskedLM
(XLM-RoBERTa model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForMaskedLM.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
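A hedged sketch of the state_dict option described above; the weights file ./my_weights.bin is hypothetical and is assumed to hold a state dictionary compatible with the architecture:

>>> import torch
>>> # Weights saved earlier, e.g. with torch.save(model.state_dict(), './my_weights.bin')
>>> custom_state_dict = torch.load('./my_weights.bin')
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased', state_dict=custom_state_dict)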
AutoModelForSeq2SeqLMΒΆ
-
class
transformers.
AutoModelForSeq2SeqLM
(*args, **kwargs)[source]ΒΆ This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the
from_pretrained()
class method or thefrom_config()
class method.This class cannot be instantiated directly using
__init__()
(throws an error).-
classmethod
from_config
(**kwargs)ΒΆ Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the modelβs configuration. Use
from_pretrained()
to load the model weights.- Parameters
config (
PretrainedConfig
) βThe model class to instantiate is selected based on the configuration class:
BartConfig
configuration class:BartForConditionalGeneration
(BART model)BigBirdPegasusConfig
configuration class:BigBirdPegasusForConditionalGeneration
(BigBirdPegasus model)BlenderbotConfig
configuration class:BlenderbotForConditionalGeneration
(Blenderbot model)BlenderbotSmallConfig
configuration class:BlenderbotSmallForConditionalGeneration
(BlenderbotSmall model)EncoderDecoderConfig
configuration class:EncoderDecoderModel
(Encoder decoder model)FSMTConfig
configuration class:FSMTForConditionalGeneration
(FairSeq Machine-Translation model)LEDConfig
configuration class:LEDForConditionalGeneration
(LED model)M2M100Config
configuration class:M2M100ForConditionalGeneration
(M2M100 model)MBartConfig
configuration class:MBartForConditionalGeneration
(mBART model)MT5Config
configuration class:MT5ForConditionalGeneration
(mT5 model)MarianConfig
configuration class:MarianMTModel
(Marian model)PegasusConfig
configuration class:PegasusForConditionalGeneration
(Pegasus model)ProphetNetConfig
configuration class:ProphetNetForConditionalGeneration
(ProphetNet model)T5Config
configuration class:T5ForConditionalGeneration
(T5 model)XLMProphetNetConfig
configuration class:XLMProphetNetForConditionalGeneration
(XLMProphetNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('t5-base')
>>> model = AutoModelForSeq2SeqLM.from_config(config)
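Since from_config() builds the model directly from the configuration, attributes can be adjusted before instantiation; a minimal sketch using the dropout_rate attribute of the T5 configuration:

>>> config = AutoConfig.from_pretrained('t5-base')
>>> config.dropout_rate = 0.2  # tweak the configuration before building the model
>>> model = AutoModelForSeq2SeqLM.from_config(config)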
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:bart β
BartForConditionalGeneration
(BART model)bigbird_pegasus β
BigBirdPegasusForConditionalGeneration
(BigBirdPegasus model)blenderbot β
BlenderbotForConditionalGeneration
(Blenderbot model)blenderbot-small β
BlenderbotSmallForConditionalGeneration
(BlenderbotSmall model)encoder-decoder β
EncoderDecoderModel
(Encoder decoder model)fsmt β
FSMTForConditionalGeneration
(FairSeq Machine-Translation model)led β
LEDForConditionalGeneration
(LED model)m2m_100 β
M2M100ForConditionalGeneration
(M2M100 model)marian β
MarianMTModel
(Marian model)mbart β
MBartForConditionalGeneration
(mBART model)mt5 β
MT5ForConditionalGeneration
(mT5 model)pegasus β
PegasusForConditionalGeneration
(Pegasus model)prophetnet β
ProphetNetForConditionalGeneration
(ProphetNet model)t5 β
T5ForConditionalGeneration
(T5 model)xlm-prophetnet β
XLMProphetNetForConditionalGeneration
(XLMProphetNet model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/t5_tf_model_config.json')
>>> model = AutoModelForSeq2SeqLM.from_pretrained('./tf_model/t5_tf_checkpoint.ckpt.index', from_tf=True, config=config)
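A sketch of pinning a model version with the revision argument; 'main' is the default branch named in the parameter description, and any branch name, tag name, or commit id works in its place:

>>> # Pin the checkpoint to a specific git revision on huggingface.co
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base', revision='main')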
AutoModelForSequenceClassificationΒΆ
-
class
transformers.
AutoModelForSequenceClassification
(*args, **kwargs)[source]ΒΆ This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the
from_pretrained()
class method or thefrom_config()
class method.This class cannot be instantiated directly using
__init__()
(throws an error).-
classmethod
from_config
(**kwargs)ΒΆ Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the modelβs configuration. Use
from_pretrained()
to load the model weights.- Parameters
config (
PretrainedConfig
) βThe model class to instantiate is selected based on the configuration class:
AlbertConfig
configuration class:AlbertForSequenceClassification
(ALBERT model)BartConfig
configuration class:BartForSequenceClassification
(BART model)BertConfig
configuration class:BertForSequenceClassification
(BERT model)BigBirdConfig
configuration class:BigBirdForSequenceClassification
(BigBird model)BigBirdPegasusConfig
configuration class:BigBirdPegasusForSequenceClassification
(BigBirdPegasus model)CTRLConfig
configuration class:CTRLForSequenceClassification
(CTRL model)CamembertConfig
configuration class:CamembertForSequenceClassification
(CamemBERT model)CanineConfig
configuration class:CanineForSequenceClassification
(Canine model)ConvBertConfig
configuration class:ConvBertForSequenceClassification
(ConvBERT model)DebertaConfig
configuration class:DebertaForSequenceClassification
(DeBERTa model)DebertaV2Config
configuration class:DebertaV2ForSequenceClassification
(DeBERTa-v2 model)DistilBertConfig
configuration class:DistilBertForSequenceClassification
(DistilBERT model)ElectraConfig
configuration class:ElectraForSequenceClassification
(ELECTRA model)FNetConfig
configuration class:FNetForSequenceClassification
(FNet model)FlaubertConfig
configuration class:FlaubertForSequenceClassification
(FlauBERT model)FunnelConfig
configuration class:FunnelForSequenceClassification
(Funnel Transformer model)GPT2Config
configuration class:GPT2ForSequenceClassification
(OpenAI GPT-2 model)GPTJConfig
configuration class:GPTJForSequenceClassification
(GPT-J model)GPTNeoConfig
configuration class:GPTNeoForSequenceClassification
(GPT Neo model)IBertConfig
configuration class:IBertForSequenceClassification
(I-BERT model)LEDConfig
configuration class:LEDForSequenceClassification
(LED model)LayoutLMConfig
configuration class:LayoutLMForSequenceClassification
(LayoutLM model)LayoutLMv2Config
configuration class:LayoutLMv2ForSequenceClassification
(LayoutLMv2 model)LongformerConfig
configuration class:LongformerForSequenceClassification
(Longformer model)MBartConfig
configuration class:MBartForSequenceClassification
(mBART model)MPNetConfig
configuration class:MPNetForSequenceClassification
(MPNet model)MegatronBertConfig
configuration class:MegatronBertForSequenceClassification
(MegatronBert model)MobileBertConfig
configuration class:MobileBertForSequenceClassification
(MobileBERT model)OpenAIGPTConfig
configuration class:OpenAIGPTForSequenceClassification
(OpenAI GPT model)ReformerConfig
configuration class:ReformerForSequenceClassification
(Reformer model)RemBertConfig
configuration class:RemBertForSequenceClassification
(RemBERT model)RoFormerConfig
configuration class:RoFormerForSequenceClassification
(RoFormer model)RobertaConfig
configuration class:RobertaForSequenceClassification
(RoBERTa model)SqueezeBertConfig
configuration class:SqueezeBertForSequenceClassification
(SqueezeBERT model)TapasConfig
configuration class:TapasForSequenceClassification
(TAPAS model)TransfoXLConfig
configuration class:TransfoXLForSequenceClassification
(Transformer-XL model)XLMConfig
configuration class:XLMForSequenceClassification
(XLM model)XLMRobertaConfig
configuration class:XLMRobertaForSequenceClassification
(XLM-RoBERTa model)XLNetConfig
configuration class:XLNetForSequenceClassification
(XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForSequenceClassification.from_config(config)
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:albert β
AlbertForSequenceClassification
(ALBERT model)bart β
BartForSequenceClassification
(BART model)bert β
BertForSequenceClassification
(BERT model)big_bird β
BigBirdForSequenceClassification
(BigBird model)bigbird_pegasus β
BigBirdPegasusForSequenceClassification
(BigBirdPegasus model)camembert β
CamembertForSequenceClassification
(CamemBERT model)canine β
CanineForSequenceClassification
(Canine model)convbert β
ConvBertForSequenceClassification
(ConvBERT model)ctrl β
CTRLForSequenceClassification
(CTRL model)deberta β
DebertaForSequenceClassification
(DeBERTa model)deberta-v2 β
DebertaV2ForSequenceClassification
(DeBERTa-v2 model)distilbert β
DistilBertForSequenceClassification
(DistilBERT model)electra β
ElectraForSequenceClassification
(ELECTRA model)flaubert β
FlaubertForSequenceClassification
(FlauBERT model)fnet β
FNetForSequenceClassification
(FNet model)funnel β
FunnelForSequenceClassification
(Funnel Transformer model)gpt2 β
GPT2ForSequenceClassification
(OpenAI GPT-2 model)gpt_neo β
GPTNeoForSequenceClassification
(GPT Neo model)gptj β
GPTJForSequenceClassification
(GPT-J model)ibert β
IBertForSequenceClassification
(I-BERT model)layoutlm β
LayoutLMForSequenceClassification
(LayoutLM model)layoutlmv2 β
LayoutLMv2ForSequenceClassification
(LayoutLMv2 model)led β
LEDForSequenceClassification
(LED model)longformer β
LongformerForSequenceClassification
(Longformer model)mbart β
MBartForSequenceClassification
(mBART model)megatron-bert β
MegatronBertForSequenceClassification
(MegatronBert model)mobilebert β
MobileBertForSequenceClassification
(MobileBERT model)mpnet β
MPNetForSequenceClassification
(MPNet model)openai-gpt β
OpenAIGPTForSequenceClassification
(OpenAI GPT model)reformer β
ReformerForSequenceClassification
(Reformer model)rembert β
RemBertForSequenceClassification
(RemBERT model)roberta β
RobertaForSequenceClassification
(RoBERTa model)roformer β
RoFormerForSequenceClassification
(RoFormer model)squeezebert β
SqueezeBertForSequenceClassification
(SqueezeBERT model)tapas β
TapasForSequenceClassification
(TAPAS model)transfo-xl β
TransfoXLForSequenceClassification
(Transformer-XL model)xlm β
XLMForSequenceClassification
(XLM model)xlm-roberta β
XLMRobertaForSequenceClassification
(XLM-RoBERTa model)xlnet β
XLNetForSequenceClassification
(XLNet model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForSequenceClassification.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
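Because remaining keyword arguments override configuration attributes, the classification head can be sized at load time; a minimal sketch (the choice of 3 labels is arbitrary):

>>> # num_labels is a configuration attribute, so it is applied to the config;
>>> # the classification head is newly initialized at that size.
>>> model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', num_labels=3)
>>> model.config.num_labels
3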
AutoModelForMultipleChoiceΒΆ
-
class
transformers.
AutoModelForMultipleChoice
(*args, **kwargs)[source]ΒΆ This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the
from_pretrained()
class method or thefrom_config()
class method.This class cannot be instantiated directly using
__init__()
(throws an error).-
classmethod
from_config
(**kwargs)ΒΆ Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the modelβs configuration. Use
from_pretrained()
to load the model weights.- Parameters
config (
PretrainedConfig
) βThe model class to instantiate is selected based on the configuration class:
AlbertConfig
configuration class:AlbertForMultipleChoice
(ALBERT model)BertConfig
configuration class:BertForMultipleChoice
(BERT model)BigBirdConfig
configuration class:BigBirdForMultipleChoice
(BigBird model)CamembertConfig
configuration class:CamembertForMultipleChoice
(CamemBERT model)CanineConfig
configuration class:CanineForMultipleChoice
(Canine model)ConvBertConfig
configuration class:ConvBertForMultipleChoice
(ConvBERT model)DistilBertConfig
configuration class:DistilBertForMultipleChoice
(DistilBERT model)ElectraConfig
configuration class:ElectraForMultipleChoice
(ELECTRA model)FNetConfig
configuration class:FNetForMultipleChoice
(FNet model)FlaubertConfig
configuration class:FlaubertForMultipleChoice
(FlauBERT model)FunnelConfig
configuration class:FunnelForMultipleChoice
(Funnel Transformer model)IBertConfig
configuration class:IBertForMultipleChoice
(I-BERT model)LongformerConfig
configuration class:LongformerForMultipleChoice
(Longformer model)MPNetConfig
configuration class:MPNetForMultipleChoice
(MPNet model)MegatronBertConfig
configuration class:MegatronBertForMultipleChoice
(MegatronBert model)MobileBertConfig
configuration class:MobileBertForMultipleChoice
(MobileBERT model)RemBertConfig
configuration class:RemBertForMultipleChoice
(RemBERT model)RoFormerConfig
configuration class:RoFormerForMultipleChoice
(RoFormer model)RobertaConfig
configuration class:RobertaForMultipleChoice
(RoBERTa model)SqueezeBertConfig
configuration class:SqueezeBertForMultipleChoice
(SqueezeBERT model)XLMConfig
configuration class:XLMForMultipleChoice
(XLM model)XLMRobertaConfig
configuration class:XLMRobertaForMultipleChoice
(XLM-RoBERTa model)XLNetConfig
configuration class:XLNetForMultipleChoice
(XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForMultipleChoice.from_config(config)
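A sketch of the save/reload cycle referenced in the from_pretrained() parameter list, reusing the config from the example; the directory ./my_model_directory/ is hypothetical:

>>> model = AutoModelForMultipleChoice.from_config(config)
>>> model.save_pretrained('./my_model_directory/')  # writes config.json and the weights
>>> model = AutoModelForMultipleChoice.from_pretrained('./my_model_directory/')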
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:albert β
AlbertForMultipleChoice
(ALBERT model)bert β
BertForMultipleChoice
(BERT model)big_bird β
BigBirdForMultipleChoice
(BigBird model)camembert β
CamembertForMultipleChoice
(CamemBERT model)canine β
CanineForMultipleChoice
(Canine model)convbert β
ConvBertForMultipleChoice
(ConvBERT model)distilbert β
DistilBertForMultipleChoice
(DistilBERT model)electra β
ElectraForMultipleChoice
(ELECTRA model)flaubert β
FlaubertForMultipleChoice
(FlauBERT model)fnet β
FNetForMultipleChoice
(FNet model)funnel β
FunnelForMultipleChoice
(Funnel Transformer model)ibert β
IBertForMultipleChoice
(I-BERT model)longformer β
LongformerForMultipleChoice
(Longformer model)megatron-bert β
MegatronBertForMultipleChoice
(MegatronBert model)mobilebert β
MobileBertForMultipleChoice
(MobileBERT model)mpnet β
MPNetForMultipleChoice
(MPNet model)rembert β
RemBertForMultipleChoice
(RemBERT model)roberta β
RobertaForMultipleChoice
(RoBERTa model)roformer β
RoFormerForMultipleChoice
(RoFormer model)squeezebert β
SqueezeBertForMultipleChoice
(SqueezeBERT model)xlm β
XLMForMultipleChoice
(XLM model)xlm-roberta β
XLMRobertaForMultipleChoice
(XLM-RoBERTa model)xlnet β
XLNetForMultipleChoice
(XLNet model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForMultipleChoice.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
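A sketch combining the cache_dir and local_files_only options described above; the directory ./hf_cache is hypothetical, and the second call assumes the first has already populated it:

>>> # First call downloads into the given cache directory
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased', cache_dir='./hf_cache')
>>> # Later calls can then run without network access
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased', cache_dir='./hf_cache', local_files_only=True)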
AutoModelForNextSentencePredictionΒΆ
-
class
transformers.
AutoModelForNextSentencePrediction
(*args, **kwargs)[source]ΒΆ This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the
from_pretrained()
class method or thefrom_config()
class method.This class cannot be instantiated directly using
__init__()
(throws an error).-
classmethod
from_config
(**kwargs)ΒΆ Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the modelβs configuration. Use
from_pretrained()
to load the model weights.- Parameters
config (
PretrainedConfig
) βThe model class to instantiate is selected based on the configuration class:
BertConfig
configuration class:BertForNextSentencePrediction
(BERT model)FNetConfig
configuration class:FNetForNextSentencePrediction
(FNet model)MegatronBertConfig
configuration class:MegatronBertForNextSentencePrediction
(MegatronBert model)MobileBertConfig
configuration class:MobileBertForNextSentencePrediction
(MobileBERT model)
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForNextSentencePrediction.from_config(config)
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:bert β
BertForNextSentencePrediction
(BERT model)fnet β
FNetForNextSentencePrediction
(FNet model)megatron-bert β
MegatronBertForNextSentencePrediction
(MegatronBert model)mobilebert β
MobileBertForNextSentencePrediction
(MobileBERT model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForNextSentencePrediction.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
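A minimal sketch of the output_loading_info option described above, which returns the model together with a dictionary of loading diagnostics:

>>> model, loading_info = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased', output_loading_info=True)
>>> # loading_info maps names such as 'missing_keys' and 'unexpected_keys' to lists
>>> missing, unexpected = loading_info['missing_keys'], loading_info['unexpected_keys']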
AutoModelForTokenClassificationΒΆ
-
class
transformers.
AutoModelForTokenClassification
(*args, **kwargs)[source]ΒΆ This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the
from_pretrained()
class method or thefrom_config()
class method.This class cannot be instantiated directly using
__init__()
(throws an error).-
classmethod
from_config
(**kwargs)ΒΆ Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the modelβs configuration. Use
from_pretrained()
to load the model weights.- Parameters
config (
PretrainedConfig
) βThe model class to instantiate is selected based on the configuration class:
AlbertConfig
configuration class:AlbertForTokenClassification
(ALBERT model)BertConfig
configuration class:BertForTokenClassification
(BERT model)BigBirdConfig
configuration class:BigBirdForTokenClassification
(BigBird model)CamembertConfig
configuration class:CamembertForTokenClassification
(CamemBERT model)CanineConfig
configuration class:CanineForTokenClassification
(Canine model)ConvBertConfig
configuration class:ConvBertForTokenClassification
(ConvBERT model)DebertaConfig
configuration class:DebertaForTokenClassification
(DeBERTa model)DebertaV2Config
configuration class:DebertaV2ForTokenClassification
(DeBERTa-v2 model)DistilBertConfig
configuration class:DistilBertForTokenClassification
(DistilBERT model)ElectraConfig
configuration class:ElectraForTokenClassification
(ELECTRA model)FNetConfig
configuration class:FNetForTokenClassification
(FNet model)FlaubertConfig
configuration class:FlaubertForTokenClassification
(FlauBERT model)FunnelConfig
configuration class:FunnelForTokenClassification
(Funnel Transformer model)GPT2Config
configuration class:GPT2ForTokenClassification
(OpenAI GPT-2 model)IBertConfig
configuration class:IBertForTokenClassification
(I-BERT model)LayoutLMConfig
configuration class:LayoutLMForTokenClassification
(LayoutLM model)LayoutLMv2Config
configuration class:LayoutLMv2ForTokenClassification
(LayoutLMv2 model)LongformerConfig
configuration class:LongformerForTokenClassification
(Longformer model)MPNetConfig
configuration class:MPNetForTokenClassification
(MPNet model)MegatronBertConfig
configuration class:MegatronBertForTokenClassification
(MegatronBert model)MobileBertConfig
configuration class:MobileBertForTokenClassification
(MobileBERT model)RemBertConfig
configuration class:RemBertForTokenClassification
(RemBERT model)RoFormerConfig
configuration class:RoFormerForTokenClassification
(RoFormer model)RobertaConfig
configuration class:RobertaForTokenClassification
(RoBERTa model)SqueezeBertConfig
configuration class:SqueezeBertForTokenClassification
(SqueezeBERT model)XLMConfig
configuration class:XLMForTokenClassification
(XLM model)XLMRobertaConfig
configuration class:XLMRobertaForTokenClassification
(XLM-RoBERTa model)XLNetConfig
configuration class:XLNetForTokenClassification
(XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForTokenClassification.from_config(config)
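A sketch of sizing the token classification head through the configuration before instantiation; the choice of 9 labels (e.g., a CoNLL-style tag set) is an assumption:

>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> config.num_labels = 9  # one logit per token-level tag
>>> model = AutoModelForTokenClassification.from_config(config)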
-
classmethod
from_pretrained
(*model_args, **kwargs)ΒΆ Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the
model_type
property of the config object (either passed as an argument or loaded frompretrained_model_name_or_path
if possible), or when itβs missing, by falling back to using pattern matching onpretrained_model_name_or_path
:albert β
AlbertForTokenClassification
(ALBERT model)bert β
BertForTokenClassification
(BERT model)big_bird β
BigBirdForTokenClassification
(BigBird model)camembert β
CamembertForTokenClassification
(CamemBERT model)canine β
CanineForTokenClassification
(Canine model)convbert β
ConvBertForTokenClassification
(ConvBERT model)deberta β
DebertaForTokenClassification
(DeBERTa model)deberta-v2 β
DebertaV2ForTokenClassification
(DeBERTa-v2 model)distilbert β
DistilBertForTokenClassification
(DistilBERT model)electra β
ElectraForTokenClassification
(ELECTRA model)flaubert β
FlaubertForTokenClassification
(FlauBERT model)fnet β
FNetForTokenClassification
(FNet model)funnel β
FunnelForTokenClassification
(Funnel Transformer model)gpt2 β
GPT2ForTokenClassification
(OpenAI GPT-2 model)ibert β
IBertForTokenClassification
(I-BERT model)layoutlm β
LayoutLMForTokenClassification
(LayoutLM model)layoutlmv2 β
LayoutLMv2ForTokenClassification
(LayoutLMv2 model)longformer β
LongformerForTokenClassification
(Longformer model)megatron-bert β
MegatronBertForTokenClassification
(MegatronBert model)mobilebert β
MobileBertForTokenClassification
(MobileBERT model)mpnet β
MPNetForTokenClassification
(MPNet model)rembert β
RemBertForTokenClassification
(RemBERT model)roberta β
RobertaForTokenClassification
(RoBERTa model)roformer β
RoFormerForTokenClassification
(RoFormer model)squeezebert β
SqueezeBertForTokenClassification
(SqueezeBERT model)xlm β
XLMForTokenClassification
(XLM model)xlm-roberta β
XLMRobertaForTokenClassification
(XLM-RoBERTa model)xlnet β
XLNetForTokenClassification
(XLNet model)
The model is set in evaluation mode by default using
model.eval()
(so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βCan be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.A path to a directory containing model weights saved using
save_pretrained()
, e.g.,./my_model_directory/
.A path or URL to a TensorFlow index checkpoint file (e.g.,
./tf_model/model.ckpt.index
). In this case,from_tf
should be set toTrue
and a configuration object should be provided asconfig
argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) β Will be passed along to the underlying model
__init__()
method.config (
PretrainedConfig
, optional) βConfiguration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using
save_pretrained()
and is reloaded by supplying the save directory.The model is loaded by supplying a local directory as
pretrained_model_name_or_path
and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) β
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using
save_pretrained()
andfrom_pretrained()
is not a simpler option.cache_dir (
str
oros.PathLike
, optional) β Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.from_tf (
bool
, optional, defaults toFalse
) β Load the model weights from a TensorFlow checkpoint save file (see docstring ofpretrained_model_name_or_path
argument).force_download (
bool
, optional, defaults toFalse
) β Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.resume_download (
bool
, optional, defaults toFalse
) β Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.proxies (
Dict[str, str]
, optional) β A dictionary of proxy servers to use by protocol or endpoint, e.g.,{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.output_loading_info (
bool
, optional, defaults toFalse
) β Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages.local_files_only (
bool
, optional, defaults toFalse
) β Whether or not to only look at local files (e.g., not try downloading the model).revision (
str
, optional, defaults to"main"
) β The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, sorevision
can be any identifier allowed by git.trust_remote_code (
bool
, optional, defaults toFalse
) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set toTrue
for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.kwargs (additional keyword arguments, optional) β
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
output_attentions=True
). Behaves differently depending on whether aconfig
is provided or automatically loaded:If a configuration is provided with
config
,**kwargs
will be directly passed to the underlying modelβs__init__
method (we assume all relevant updates to the configuration have already been done)If a configuration is not provided,
kwargs
will be first passed to the configuration class initialization function (from_pretrained()
). Each key ofkwargs
that corresponds to a configuration attribute will be used to override said attribute with the suppliedkwargs
value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying modelβs__init__
function.
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForTokenClassification.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
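For a quick end-to-end check, here is a minimal inference sketch (not part of the generated reference; the NER checkpoint name and the input sentence are assumptions chosen for illustration):

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = 'dbmdz/bert-large-cased-finetuned-conll03-english'  # assumed NER checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():  # the model is already in eval mode, as noted above
    logits = model(**inputs).logits

# One predicted label per input token, mapped back through the config.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])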
AutoModelForQuestionAnswering¶
class transformers.AutoModelForQuestionAnswering(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: AlbertForQuestionAnswering (ALBERT model)
  - BartConfig configuration class: BartForQuestionAnswering (BART model)
  - BertConfig configuration class: BertForQuestionAnswering (BERT model)
  - BigBirdConfig configuration class: BigBirdForQuestionAnswering (BigBird model)
  - BigBirdPegasusConfig configuration class: BigBirdPegasusForQuestionAnswering (BigBirdPegasus model)
  - CamembertConfig configuration class: CamembertForQuestionAnswering (CamemBERT model)
  - CanineConfig configuration class: CanineForQuestionAnswering (Canine model)
  - ConvBertConfig configuration class: ConvBertForQuestionAnswering (ConvBERT model)
  - DebertaConfig configuration class: DebertaForQuestionAnswering (DeBERTa model)
  - DebertaV2Config configuration class: DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
  - DistilBertConfig configuration class: DistilBertForQuestionAnswering (DistilBERT model)
  - ElectraConfig configuration class: ElectraForQuestionAnswering (ELECTRA model)
  - FNetConfig configuration class: FNetForQuestionAnswering (FNet model)
  - FlaubertConfig configuration class: FlaubertForQuestionAnsweringSimple (FlauBERT model)
  - FunnelConfig configuration class: FunnelForQuestionAnswering (Funnel Transformer model)
  - IBertConfig configuration class: IBertForQuestionAnswering (I-BERT model)
  - LEDConfig configuration class: LEDForQuestionAnswering (LED model)
  - LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
  - LongformerConfig configuration class: LongformerForQuestionAnswering (Longformer model)
  - LxmertConfig configuration class: LxmertForQuestionAnswering (LXMERT model)
  - MBartConfig configuration class: MBartForQuestionAnswering (mBART model)
  - MPNetConfig configuration class: MPNetForQuestionAnswering (MPNet model)
  - MegatronBertConfig configuration class: MegatronBertForQuestionAnswering (MegatronBert model)
  - MobileBertConfig configuration class: MobileBertForQuestionAnswering (MobileBERT model)
  - ReformerConfig configuration class: ReformerForQuestionAnswering (Reformer model)
  - RemBertConfig configuration class: RemBertForQuestionAnswering (RemBERT model)
  - RoFormerConfig configuration class: RoFormerForQuestionAnswering (RoFormer model)
  - RobertaConfig configuration class: RobertaForQuestionAnswering (RoBERTa model)
  - SplinterConfig configuration class: SplinterForQuestionAnswering (Splinter model)
  - SqueezeBertConfig configuration class: SqueezeBertForQuestionAnswering (SqueezeBERT model)
  - XLMConfig configuration class: XLMForQuestionAnsweringSimple (XLM model)
  - XLMRobertaConfig configuration class: XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
  - XLNetConfig configuration class: XLNetForQuestionAnsweringSimple (XLNet model)
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForQuestionAnswering.from_config(config)
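Since from_config() builds the architecture without loading any weights, a common pattern is to tweak the configuration before instantiating. A minimal sketch (the overridden attribute is a standard BertConfig field, chosen as an example):

from transformers import AutoConfig, AutoModelForQuestionAnswering

# Build an untrained QA model from a modified configuration.
config = AutoConfig.from_pretrained('bert-base-cased')
config.hidden_dropout_prob = 0.2  # example override; any config attribute works
model = AutoModelForQuestionAnswering.from_config(config)
# The resulting model has randomly initialized weights; use from_pretrained()
# when you also want the pretrained weights.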
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertForQuestionAnswering (ALBERT model)
- bart → BartForQuestionAnswering (BART model)
- bert → BertForQuestionAnswering (BERT model)
- big_bird → BigBirdForQuestionAnswering (BigBird model)
- bigbird_pegasus → BigBirdPegasusForQuestionAnswering (BigBirdPegasus model)
- camembert → CamembertForQuestionAnswering (CamemBERT model)
- canine → CanineForQuestionAnswering (Canine model)
- convbert → ConvBertForQuestionAnswering (ConvBERT model)
- deberta → DebertaForQuestionAnswering (DeBERTa model)
- deberta-v2 → DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
- distilbert → DistilBertForQuestionAnswering (DistilBERT model)
- electra → ElectraForQuestionAnswering (ELECTRA model)
- flaubert → FlaubertForQuestionAnsweringSimple (FlauBERT model)
- fnet → FNetForQuestionAnswering (FNet model)
- funnel → FunnelForQuestionAnswering (Funnel Transformer model)
- ibert → IBertForQuestionAnswering (I-BERT model)
- layoutlmv2 → LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
- led → LEDForQuestionAnswering (LED model)
- longformer → LongformerForQuestionAnswering (Longformer model)
- lxmert → LxmertForQuestionAnswering (LXMERT model)
- mbart → MBartForQuestionAnswering (mBART model)
- megatron-bert → MegatronBertForQuestionAnswering (MegatronBert model)
- mobilebert → MobileBertForQuestionAnswering (MobileBERT model)
- mpnet → MPNetForQuestionAnswering (MPNet model)
- reformer → ReformerForQuestionAnswering (Reformer model)
- rembert → RemBertForQuestionAnswering (RemBERT model)
- roberta → RobertaForQuestionAnswering (RoBERTa model)
- roformer → RoFormerForQuestionAnswering (RoFormer model)
- splinter → SplinterForQuestionAnswering (Splinter model)
- squeezebert → SqueezeBertForQuestionAnswering (SqueezeBERT model)
- xlm → XLMForQuestionAnsweringSimple (XLM model)
- xlm-roberta → XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
- xlnet → XLNetForQuestionAnsweringSimple (XLNet model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForQuestionAnswering.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
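As a usage illustration (not part of the generated reference; the SQuAD checkpoint and the question/context pair are assumptions), the span between the best start and end logits is decoded back to text:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

checkpoint = 'bert-large-uncased-whole-word-masking-finetuned-squad'  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

question, context = "Where is the Eiffel Tower?", "The Eiffel Tower is in Paris."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The QA head returns start/end logits; the answer is the best-scoring span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))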
AutoModelForTableQuestionAnswering¶
class transformers.AutoModelForTableQuestionAnswering(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - TapasConfig configuration class: TapasForQuestionAnswering (TAPAS model)
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('google/tapas-base-finetuned-wtq')
>>> model = AutoModelForTableQuestionAnswering.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- tapas → TapasForQuestionAnswering (TAPAS model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq')
>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/tapas_tf_model_config.json')
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('./tf_model/tapas_tf_checkpoint.ckpt.index', from_tf=True, config=config)
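A minimal usage sketch (an illustration, not part of the generated reference: TAPAS expects its tables as pandas DataFrames of strings, so pandas must be installed, and the toy table below is invented):

import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

checkpoint = 'google/tapas-base-finetuned-wtq'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # a TapasTokenizer
model = AutoModelForTableQuestionAnswering.from_pretrained(checkpoint)

table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2100000", "3600000"]})
inputs = tokenizer(table=table, queries=["How many people live in Berlin?"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# outputs.logits scores table cells per token; see the TAPAS documentation
# for the helpers that turn these logits into cell selections and answers.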
AutoModelForImageClassification¶
class transformers.AutoModelForImageClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - BeitConfig configuration class: BeitForImageClassification (BEiT model)
  - DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
  - SegformerConfig configuration class: SegformerForImageClassification (SegFormer model)
  - ViTConfig configuration class: ViTForImageClassification (ViT model)
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('google/vit-base-patch16-224')
>>> model = AutoModelForImageClassification.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- beit → BeitForImageClassification (BEiT model)
- deit → DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
- segformer → SegformerForImageClassification (SegFormer model)
- vit → ViTForImageClassification (ViT model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224')
>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/vit_tf_model_config.json')
>>> model = AutoModelForImageClassification.from_pretrained('./tf_model/vit_tf_checkpoint.ckpt.index', from_tf=True, config=config)
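For reference, a minimal inference sketch (not part of the generated reference; the image path is hypothetical and the ViT checkpoint is an assumption matching the example above):

import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

checkpoint = 'google/vit-base-patch16-224'
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open('cat.png').convert('RGB')  # hypothetical local image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])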
AutoModelForAudioClassification¶
class transformers.AutoModelForAudioClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with an audio classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - HubertConfig configuration class: HubertForSequenceClassification (Hubert model)
  - SEWConfig configuration class: SEWForSequenceClassification (SEW model)
  - SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model)
  - UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model)
  - UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model)
  - Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('superb/wav2vec2-base-superb-ks')
>>> model = AutoModelForAudioClassification.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- hubert → HubertForSequenceClassification (Hubert model)
- sew → SEWForSequenceClassification (SEW model)
- sew-d → SEWDForSequenceClassification (SEW-D model)
- unispeech → UniSpeechForSequenceClassification (UniSpeech model)
- unispeech-sat → UniSpeechSatForSequenceClassification (UniSpeechSat model)
- wav2vec2 → Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained('superb/wav2vec2-base-superb-ks')
>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained('superb/wav2vec2-base-superb-ks', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/wav2vec2_tf_model_config.json')
>>> model = AutoModelForAudioClassification.from_pretrained('./tf_model/wav2vec2_tf_checkpoint.ckpt.index', from_tf=True, config=config)
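A minimal inference sketch (not part of the generated reference; the keyword-spotting checkpoint is an assumption, and the silent placeholder array stands in for real 16 kHz mono audio loaded with, e.g., soundfile or torchaudio):

import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

checkpoint = 'superb/wav2vec2-base-superb-ks'  # assumed audio classification checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForAudioClassification.from_pretrained(checkpoint)

raw_speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = feature_extractor(raw_speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])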
AutoModelForCTC¶
class transformers.AutoModelForCTC(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - HubertConfig configuration class: HubertForCTC (Hubert model)
  - SEWConfig configuration class: SEWForCTC (SEW model)
  - SEWDConfig configuration class: SEWDForCTC (SEW-D model)
  - UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model)
  - UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model)
  - Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model)
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('facebook/wav2vec2-base-960h')
>>> model = AutoModelForCTC.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- hubert → HubertForCTC (Hubert model)
- sew → SEWForCTC (SEW model)
- sew-d → SEWDForCTC (SEW-D model)
- unispeech → UniSpeechForCTC (UniSpeech model)
- unispeech-sat → UniSpeechSatForCTC (UniSpeechSat model)
- wav2vec2 → Wav2Vec2ForCTC (Wav2Vec2 model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained('facebook/wav2vec2-base-960h')
>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained('facebook/wav2vec2-base-960h', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/wav2vec2_tf_model_config.json')
>>> model = AutoModelForCTC.from_pretrained('./tf_model/wav2vec2_tf_checkpoint.ckpt.index', from_tf=True, config=config)
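For reference, a greedy CTC decoding sketch (not part of the generated reference; the placeholder array stands in for real 16 kHz mono audio):

import numpy as np
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor

checkpoint = 'facebook/wav2vec2-base-960h'
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = AutoModelForCTC.from_pretrained(checkpoint)

raw_speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: best token per frame, then the processor collapses
# repeats and strips the blank token.
predicted_ids = logits.argmax(dim=-1)
print(processor.batch_decode(predicted_ids))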
AutoModelForSpeechSeq2Seq¶
class transformers.AutoModelForSpeechSeq2Seq(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - Speech2TextConfig configuration class: Speech2TextForConditionalGeneration (Speech2Text model)
  - SpeechEncoderDecoderConfig configuration class: SpeechEncoderDecoderModel (Speech Encoder decoder model)
Examples:
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('facebook/s2t-small-librispeech-asr')
>>> model = AutoModelForSpeechSeq2Seq.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- speech-encoder-decoder → SpeechEncoderDecoderModel (Speech Encoder decoder model)
- speech_to_text → Speech2TextForConditionalGeneration (Speech2Text model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained('facebook/s2t-small-librispeech-asr')
>>> # Update configuration during loading
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained('facebook/s2t-small-librispeech-asr', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/s2t_tf_model_config.json')
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained('./tf_model/s2t_tf_checkpoint.ckpt.index', from_tf=True, config=config)
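A minimal transcription sketch (not part of the generated reference; the placeholder array stands in for real 16 kHz mono audio):

import numpy as np
from transformers import AutoModelForSpeechSeq2Seq, Speech2TextProcessor

checkpoint = 'facebook/s2t-small-librispeech-asr'
processor = Speech2TextProcessor.from_pretrained(checkpoint)
model = AutoModelForSpeechSeq2Seq.from_pretrained(checkpoint)

raw_speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")
# Seq2seq speech models transcribe by autoregressive generation.
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))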
AutoModelForObjectDetection¶
class transformers.AutoModelForObjectDetection(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with an object detection head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
- config (PretrainedConfig): The model class to instantiate is selected based on the configuration class:
  - DetrConfig configuration class: DetrForObjectDetection (DETR model)
Examples:
>>> from transformers import AutoConfig, AutoModelForObjectDetection
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('facebook/detr-resnet-50')
>>> model = AutoModelForObjectDetection.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- detr → DetrForObjectDetection (DETR model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Parameters
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional): Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional): Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional): A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False): Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False): Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__() method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__() function.
Examples:
>>> from transformers import AutoModelForObjectDetection
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained('facebook/detr-resnet-50')
>>> # Update configuration during loading.
>>> model = AutoModelForObjectDetection.from_pretrained('facebook/detr-resnet-50', output_attentions=True)
>>> model.config.output_attentions
True
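For context, a minimal end-to-end inference sketch (not part of the original reference): it assumes the public facebook/detr-resnet-50 checkpoint and its paired feature extractor, and DETR additionally requires the timm backbone package. The blank PIL image is only a stand-in for a real photo:

>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, AutoModelForObjectDetection
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/detr-resnet-50')
>>> model = AutoModelForObjectDetection.from_pretrained('facebook/detr-resnet-50')
>>> image = Image.new('RGB', (640, 480))  # stand-in for a real photo
>>> inputs = feature_extractor(images=image, return_tensors='pt')  # pixel_values and pixel_mask
>>> outputs = model(**inputs)
>>> outputs.logits.shape  # (batch size, num queries, num labels + 1 for the "no object" class)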
AutoModelForImageSegmentation

- class transformers.AutoModelForImageSegmentation(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - DetrConfig configuration class: DetrForSegmentation (DETR model)
Examples:
>>> from transformers import AutoConfig, AutoModelForImageSegmentation
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('facebook/detr-resnet-50-panoptic')
>>> model = AutoModelForImageSegmentation.from_config(config)
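Because from_config() never downloads weights, it also works with a configuration built from scratch. A small sketch, assuming DETR's timm backbone dependency is installed; the default DetrConfig here is purely illustrative:

>>> from transformers import DetrConfig, AutoModelForImageSegmentation
>>> config = DetrConfig()  # fresh, illustrative config; weights will be randomly initialized
>>> model = AutoModelForImageSegmentation.from_config(config)
>>> type(model).__name__
'DetrForSegmentation'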
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - detr – DetrForSegmentation (DETR model)

  The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - state_dict (Dict[str, torch.Tensor], optional) – A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoModelForImageSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic')
>>> # Update configuration during loading.
>>> model = AutoModelForImageSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic', output_attentions=True)
>>> model.config.output_attentions
True
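As above, a hedged inference sketch (not from the original reference), assuming the facebook/detr-resnet-50-panoptic checkpoint, its paired feature extractor, the timm dependency, and that the segmentation output exposes class logits as logits and mask logits as pred_masks:

>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, AutoModelForImageSegmentation
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/detr-resnet-50-panoptic')
>>> model = AutoModelForImageSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic')
>>> image = Image.new('RGB', (640, 480))  # stand-in for a real photo
>>> inputs = feature_extractor(images=image, return_tensors='pt')
>>> outputs = model(**inputs)
>>> outputs.pred_masks.shape  # per-query mask logits; class logits are in outputs.logits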
TFAutoModel

- class transformers.TFAutoModel(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the base model classes of the library from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - AlbertConfig configuration class: TFAlbertModel (ALBERT model)
      - BartConfig configuration class: TFBartModel (BART model)
      - BertConfig configuration class: TFBertModel (BERT model)
      - BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model)
      - BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model)
      - CTRLConfig configuration class: TFCTRLModel (CTRL model)
      - CamembertConfig configuration class: TFCamembertModel (CamemBERT model)
      - ConvBertConfig configuration class: TFConvBertModel (ConvBERT model)
      - DPRConfig configuration class: TFDPRQuestionEncoder (DPR model)
      - DebertaConfig configuration class: TFDebertaModel (DeBERTa model)
      - DebertaV2Config configuration class: TFDebertaV2Model (DeBERTa-v2 model)
      - DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model)
      - ElectraConfig configuration class: TFElectraModel (ELECTRA model)
      - FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model)
      - FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
      - GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model)
      - HubertConfig configuration class: TFHubertModel (Hubert model)
      - LEDConfig configuration class: TFLEDModel (LED model)
      - LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model)
      - LongformerConfig configuration class: TFLongformerModel (Longformer model)
      - LxmertConfig configuration class: TFLxmertModel (LXMERT model)
      - MBartConfig configuration class: TFMBartModel (mBART model)
      - MPNetConfig configuration class: TFMPNetModel (MPNet model)
      - MT5Config configuration class: TFMT5Model (mT5 model)
      - MarianConfig configuration class: TFMarianModel (Marian model)
      - MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model)
      - OpenAIGPTConfig configuration class: TFOpenAIGPTModel (OpenAI GPT model)
      - PegasusConfig configuration class: TFPegasusModel (Pegasus model)
      - RemBertConfig configuration class: TFRemBertModel (RemBERT model)
      - RoFormerConfig configuration class: TFRoFormerModel (RoFormer model)
      - RobertaConfig configuration class: TFRobertaModel (RoBERTa model)
      - T5Config configuration class: TFT5Model (T5 model)
      - TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model)
      - Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model)
      - XLMConfig configuration class: TFXLMModel (XLM model)
      - XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model)
      - XLNetConfig configuration class: TFXLNetModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModel
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModel.from_config(config)
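To make the class resolution concrete, a short sketch (not from the original reference) showing that the returned object is the architecture matching the configuration, with randomly initialized weights:

>>> from transformers import AutoConfig, TFAutoModel
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModel.from_config(config)  # architecture only; weights are randomly initialized
>>> type(model).__name__
'TFBertModel'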
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the base model classes of the library from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - albert – TFAlbertModel (ALBERT model)
  - bart – TFBartModel (BART model)
  - bert – TFBertModel (BERT model)
  - blenderbot – TFBlenderbotModel (Blenderbot model)
  - blenderbot-small – TFBlenderbotSmallModel (BlenderbotSmall model)
  - camembert – TFCamembertModel (CamemBERT model)
  - convbert – TFConvBertModel (ConvBERT model)
  - ctrl – TFCTRLModel (CTRL model)
  - deberta – TFDebertaModel (DeBERTa model)
  - deberta-v2 – TFDebertaV2Model (DeBERTa-v2 model)
  - distilbert – TFDistilBertModel (DistilBERT model)
  - dpr – TFDPRQuestionEncoder (DPR model)
  - electra – TFElectraModel (ELECTRA model)
  - flaubert – TFFlaubertModel (FlauBERT model)
  - funnel – TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
  - gpt2 – TFGPT2Model (OpenAI GPT-2 model)
  - hubert – TFHubertModel (Hubert model)
  - layoutlm – TFLayoutLMModel (LayoutLM model)
  - led – TFLEDModel (LED model)
  - longformer – TFLongformerModel (Longformer model)
  - lxmert – TFLxmertModel (LXMERT model)
  - marian – TFMarianModel (Marian model)
  - mbart – TFMBartModel (mBART model)
  - mobilebert – TFMobileBertModel (MobileBERT model)
  - mpnet – TFMPNetModel (MPNet model)
  - mt5 – TFMT5Model (mT5 model)
  - openai-gpt – TFOpenAIGPTModel (OpenAI GPT model)
  - pegasus – TFPegasusModel (Pegasus model)
  - rembert – TFRemBertModel (RemBERT model)
  - roberta – TFRobertaModel (RoBERTa model)
  - roformer – TFRoFormerModel (RoFormer model)
  - t5 – TFT5Model (T5 model)
  - transfo-xl – TFTransfoXLModel (Transformer-XL model)
  - wav2vec2 – TFWav2Vec2Model (Wav2Vec2 model)
  - xlm – TFXLMModel (XLM model)
  - xlm-roberta – TFXLMRobertaModel (XLM-RoBERTa model)
  - xlnet – TFXLNetModel (XLNet model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModel.from_pretrained('bert-base-cased')
>>> # Update configuration during loading.
>>> model = TFAutoModel.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower).
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModel.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
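A typical use of the base model is feature extraction; a minimal sketch (not part of the original reference):

>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = TFAutoModel.from_pretrained('bert-base-cased')
>>> inputs = tokenizer('Hello world!', return_tensors='tf')
>>> outputs = model(inputs)
>>> outputs.last_hidden_state.shape  # (batch size, sequence length, hidden size)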
TFAutoModelForPreTraining

- class transformers.TFAutoModelForPreTraining(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - AlbertConfig configuration class: TFAlbertForPreTraining (ALBERT model)
      - BartConfig configuration class: TFBartForConditionalGeneration (BART model)
      - BertConfig configuration class: TFBertForPreTraining (BERT model)
      - CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model)
      - CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model)
      - DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model)
      - ElectraConfig configuration class: TFElectraForPreTraining (ELECTRA model)
      - FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model)
      - FunnelConfig configuration class: TFFunnelForPreTraining (Funnel Transformer model)
      - GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model)
      - LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model)
      - LxmertConfig configuration class: TFLxmertForPreTraining (LXMERT model)
      - MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model)
      - MobileBertConfig configuration class: TFMobileBertForPreTraining (MobileBERT model)
      - OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model)
      - RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model)
      - T5Config configuration class: TFT5ForConditionalGeneration (T5 model)
      - TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model)
      - XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
      - XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
      - XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForPreTraining
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForPreTraining.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - albert – TFAlbertForPreTraining (ALBERT model)
  - bart – TFBartForConditionalGeneration (BART model)
  - bert – TFBertForPreTraining (BERT model)
  - camembert – TFCamembertForMaskedLM (CamemBERT model)
  - ctrl – TFCTRLLMHeadModel (CTRL model)
  - distilbert – TFDistilBertForMaskedLM (DistilBERT model)
  - electra – TFElectraForPreTraining (ELECTRA model)
  - flaubert – TFFlaubertWithLMHeadModel (FlauBERT model)
  - funnel – TFFunnelForPreTraining (Funnel Transformer model)
  - gpt2 – TFGPT2LMHeadModel (OpenAI GPT-2 model)
  - layoutlm – TFLayoutLMForMaskedLM (LayoutLM model)
  - lxmert – TFLxmertForPreTraining (LXMERT model)
  - mobilebert – TFMobileBertForPreTraining (MobileBERT model)
  - mpnet – TFMPNetForMaskedLM (MPNet model)
  - openai-gpt – TFOpenAIGPTLMHeadModel (OpenAI GPT model)
  - roberta – TFRobertaForMaskedLM (RoBERTa model)
  - t5 – TFT5ForConditionalGeneration (T5 model)
  - transfo-xl – TFTransfoXLLMHeadModel (Transformer-XL model)
  - xlm – TFXLMWithLMHeadModel (XLM model)
  - xlm-roberta – TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
  - xlnet – TFXLNetLMHeadModel (XLNet model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForPreTraining.from_pretrained('bert-base-cased')
>>> # Update configuration during loading.
>>> model = TFAutoModelForPreTraining.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower).
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForPreTraining.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
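A short forward-pass sketch (not from the original reference); the output attribute names below are those of BERT's pretraining output (prediction_logits for the masked-LM head, seq_relationship_logits for the next-sentence head) and differ for other architectures:

>>> from transformers import AutoTokenizer, TFAutoModelForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForPreTraining.from_pretrained('bert-base-cased')
>>> inputs = tokenizer('Hello world!', return_tensors='tf')
>>> outputs = model(inputs)
>>> outputs.prediction_logits.shape        # masked-LM head: (batch, sequence length, vocab size)
>>> outputs.seq_relationship_logits.shape  # next-sentence head: (batch, 2)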
TFAutoModelForCausalLM

- class transformers.TFAutoModelForCausalLM(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - BertConfig configuration class: TFBertLMHeadModel (BERT model)
      - CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model)
      - GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model)
      - OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model)
      - RemBertConfig configuration class: TFRemBertForCausalLM (RemBERT model)
      - RoFormerConfig configuration class: TFRoFormerForCausalLM (RoFormer model)
      - RobertaConfig configuration class: TFRobertaForCausalLM (RoBERTa model)
      - TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model)
      - XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
      - XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForCausalLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForCausalLM.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - bert – TFBertLMHeadModel (BERT model)
  - ctrl – TFCTRLLMHeadModel (CTRL model)
  - gpt2 – TFGPT2LMHeadModel (OpenAI GPT-2 model)
  - openai-gpt – TFOpenAIGPTLMHeadModel (OpenAI GPT model)
  - rembert – TFRemBertForCausalLM (RemBERT model)
  - roberta – TFRobertaForCausalLM (RoBERTa model)
  - roformer – TFRoFormerForCausalLM (RoFormer model)
  - transfo-xl – TFTransfoXLLMHeadModel (Transformer-XL model)
  - xlm – TFXLMWithLMHeadModel (XLM model)
  - xlnet – TFXLNetLMHeadModel (XLNet model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForCausalLM.from_pretrained('bert-base-cased')
>>> # Update configuration during loading.
>>> model = TFAutoModelForCausalLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower).
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForCausalLM.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
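In practice this class is mostly used with decoder-only checkpoints such as gpt2; a minimal text-generation sketch (not part of the original reference):

>>> from transformers import AutoTokenizer, TFAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained('gpt2')
>>> model = TFAutoModelForCausalLM.from_pretrained('gpt2')
>>> inputs = tokenizer('The Eiffel Tower is in', return_tensors='tf')
>>> output_ids = model.generate(inputs['input_ids'], max_length=20)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)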
TFAutoModelForMaskedLM

- class transformers.TFAutoModelForMaskedLM(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - AlbertConfig configuration class: TFAlbertForMaskedLM (ALBERT model)
      - BertConfig configuration class: TFBertForMaskedLM (BERT model)
      - CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model)
      - ConvBertConfig configuration class: TFConvBertForMaskedLM (ConvBERT model)
      - DebertaConfig configuration class: TFDebertaForMaskedLM (DeBERTa model)
      - DebertaV2Config configuration class: TFDebertaV2ForMaskedLM (DeBERTa-v2 model)
      - DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model)
      - ElectraConfig configuration class: TFElectraForMaskedLM (ELECTRA model)
      - FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model)
      - FunnelConfig configuration class: TFFunnelForMaskedLM (Funnel Transformer model)
      - LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model)
      - LongformerConfig configuration class: TFLongformerForMaskedLM (Longformer model)
      - MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model)
      - MobileBertConfig configuration class: TFMobileBertForMaskedLM (MobileBERT model)
      - RemBertConfig configuration class: TFRemBertForMaskedLM (RemBERT model)
      - RoFormerConfig configuration class: TFRoFormerForMaskedLM (RoFormer model)
      - RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model)
      - XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
      - XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForMaskedLM.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - albert – TFAlbertForMaskedLM (ALBERT model)
  - bert – TFBertForMaskedLM (BERT model)
  - camembert – TFCamembertForMaskedLM (CamemBERT model)
  - convbert – TFConvBertForMaskedLM (ConvBERT model)
  - deberta – TFDebertaForMaskedLM (DeBERTa model)
  - deberta-v2 – TFDebertaV2ForMaskedLM (DeBERTa-v2 model)
  - distilbert – TFDistilBertForMaskedLM (DistilBERT model)
  - electra – TFElectraForMaskedLM (ELECTRA model)
  - flaubert – TFFlaubertWithLMHeadModel (FlauBERT model)
  - funnel – TFFunnelForMaskedLM (Funnel Transformer model)
  - layoutlm – TFLayoutLMForMaskedLM (LayoutLM model)
  - longformer – TFLongformerForMaskedLM (Longformer model)
  - mobilebert – TFMobileBertForMaskedLM (MobileBERT model)
  - mpnet – TFMPNetForMaskedLM (MPNet model)
  - rembert – TFRemBertForMaskedLM (RemBERT model)
  - roberta – TFRobertaForMaskedLM (RoBERTa model)
  - roformer – TFRoFormerForMaskedLM (RoFormer model)
  - xlm – TFXLMWithLMHeadModel (XLM model)
  - xlm-roberta – TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> # Update configuration during loading.
>>> model = TFAutoModelForMaskedLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower).
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForMaskedLM.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
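A minimal fill-mask sketch (not part of the original reference), predicting the token at the [MASK] position:

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> inputs = tokenizer('Paris is the [MASK] of France.', return_tensors='tf')
>>> logits = model(inputs).logits
>>> mask_position = tf.where(inputs['input_ids'][0] == tokenizer.mask_token_id)[0, 0]
>>> predicted_id = int(tf.argmax(logits[0, mask_position]))
>>> tokenizer.decode([predicted_id])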
TFAutoModelForSeq2SeqLM

- class transformers.TFAutoModelForSeq2SeqLM(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - BartConfig configuration class: TFBartForConditionalGeneration (BART model)
      - BlenderbotConfig configuration class: TFBlenderbotForConditionalGeneration (Blenderbot model)
      - BlenderbotSmallConfig configuration class: TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
      - EncoderDecoderConfig configuration class: TFEncoderDecoderModel (Encoder decoder model)
      - LEDConfig configuration class: TFLEDForConditionalGeneration (LED model)
      - MBartConfig configuration class: TFMBartForConditionalGeneration (mBART model)
      - MT5Config configuration class: TFMT5ForConditionalGeneration (mT5 model)
      - MarianConfig configuration class: TFMarianMTModel (Marian model)
      - PegasusConfig configuration class: TFPegasusForConditionalGeneration (Pegasus model)
      - T5Config configuration class: TFT5ForConditionalGeneration (T5 model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('t5-base')
>>> model = TFAutoModelForSeq2SeqLM.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - bart – TFBartForConditionalGeneration (BART model)
  - blenderbot – TFBlenderbotForConditionalGeneration (Blenderbot model)
  - blenderbot-small – TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
  - encoder-decoder – TFEncoderDecoderModel (Encoder decoder model)
  - led – TFLEDForConditionalGeneration (LED model)
  - marian – TFMarianMTModel (Marian model)
  - mbart – TFMBartForConditionalGeneration (mBART model)
  - mt5 – TFMT5ForConditionalGeneration (mT5 model)
  - pegasus – TFPegasusForConditionalGeneration (Pegasus model)
  - t5 – TFT5ForConditionalGeneration (T5 model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> # Update configuration during loading.
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained('t5-base', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower).
>>> config = AutoConfig.from_pretrained('./pt_model/t5_pt_model_config.json')
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained('./pt_model/t5_pytorch_model.bin', from_pt=True, config=config)
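A minimal generation sketch (not part of the original reference), using t5-base's translation prefix:

>>> from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained('t5-base')
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> inputs = tokenizer('translate English to German: How old are you?', return_tensors='tf')
>>> output_ids = model.generate(inputs['input_ids'], max_length=40)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)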
TFAutoModelForSequenceClassification

- class transformers.TFAutoModelForSequenceClassification(*args, **kwargs)[source]

  This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.

  This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)

  Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.

  Note

  Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.

  - Parameters
    - config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
      - AlbertConfig configuration class: TFAlbertForSequenceClassification (ALBERT model)
      - BertConfig configuration class: TFBertForSequenceClassification (BERT model)
      - CTRLConfig configuration class: TFCTRLForSequenceClassification (CTRL model)
      - CamembertConfig configuration class: TFCamembertForSequenceClassification (CamemBERT model)
      - ConvBertConfig configuration class: TFConvBertForSequenceClassification (ConvBERT model)
      - DebertaConfig configuration class: TFDebertaForSequenceClassification (DeBERTa model)
      - DebertaV2Config configuration class: TFDebertaV2ForSequenceClassification (DeBERTa-v2 model)
      - DistilBertConfig configuration class: TFDistilBertForSequenceClassification (DistilBERT model)
      - ElectraConfig configuration class: TFElectraForSequenceClassification (ELECTRA model)
      - FlaubertConfig configuration class: TFFlaubertForSequenceClassification (FlauBERT model)
      - FunnelConfig configuration class: TFFunnelForSequenceClassification (Funnel Transformer model)
      - GPT2Config configuration class: TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
      - LayoutLMConfig configuration class: TFLayoutLMForSequenceClassification (LayoutLM model)
      - LongformerConfig configuration class: TFLongformerForSequenceClassification (Longformer model)
      - MPNetConfig configuration class: TFMPNetForSequenceClassification (MPNet model)
      - MobileBertConfig configuration class: TFMobileBertForSequenceClassification (MobileBERT model)
      - OpenAIGPTConfig configuration class: TFOpenAIGPTForSequenceClassification (OpenAI GPT model)
      - RemBertConfig configuration class: TFRemBertForSequenceClassification (RemBERT model)
      - RoFormerConfig configuration class: TFRoFormerForSequenceClassification (RoFormer model)
      - RobertaConfig configuration class: TFRobertaForSequenceClassification (RoBERTa model)
      - TransfoXLConfig configuration class: TFTransfoXLForSequenceClassification (Transformer-XL model)
      - XLMConfig configuration class: TFXLMForSequenceClassification (XLM model)
      - XLMRobertaConfig configuration class: TFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
      - XLNetConfig configuration class: TFXLNetForSequenceClassification (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForSequenceClassification.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)

  Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.

  The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  - albert – TFAlbertForSequenceClassification (ALBERT model)
  - bert – TFBertForSequenceClassification (BERT model)
  - camembert – TFCamembertForSequenceClassification (CamemBERT model)
  - convbert – TFConvBertForSequenceClassification (ConvBERT model)
  - ctrl – TFCTRLForSequenceClassification (CTRL model)
  - deberta – TFDebertaForSequenceClassification (DeBERTa model)
  - deberta-v2 – TFDebertaV2ForSequenceClassification (DeBERTa-v2 model)
  - distilbert – TFDistilBertForSequenceClassification (DistilBERT model)
  - electra – TFElectraForSequenceClassification (ELECTRA model)
  - flaubert – TFFlaubertForSequenceClassification (FlauBERT model)
  - funnel – TFFunnelForSequenceClassification (Funnel Transformer model)
  - gpt2 – TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
  - layoutlm – TFLayoutLMForSequenceClassification (LayoutLM model)
  - longformer – TFLongformerForSequenceClassification (Longformer model)
  - mobilebert – TFMobileBertForSequenceClassification (MobileBERT model)
  - mpnet – TFMPNetForSequenceClassification (MPNet model)
  - openai-gpt – TFOpenAIGPTForSequenceClassification (OpenAI GPT model)
  - rembert – TFRemBertForSequenceClassification (RemBERT model)
  - roberta – TFRobertaForSequenceClassification (RoBERTa model)
  - roformer – TFRoFormerForSequenceClassification (RoFormer model)
  - transfo-xl – TFTransfoXLForSequenceClassification (Transformer-XL model)
  - xlm – TFXLMForSequenceClassification (XLM model)
  - xlm-roberta – TFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
  - xlnet – TFXLNetForSequenceClassification (XLNet model)
- Parameters
  - pretrained_model_name_or_path (str or os.PathLike) – Can be either:
    - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - model_args (additional positional arguments, optional) – Will be passed along to the underlying model's __init__() method.
  - config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
    - The model is a model provided by the library (loaded with the model id string of a pretrained model).
    - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  - cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  - from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
  - force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  - resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  - proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  - output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  - local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
  - revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - trust_remote_code (bool, optional, defaults to False) – Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  - kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
    - If a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
    - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSequenceClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = TFAutoModelForSequenceClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForSequenceClassification.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
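As a supplement to the loading example above, here is a minimal end-to-end inference sketch (not part of the original reference). The input sentence and num_labels are illustrative assumptions, and a classification head loaded on top of a base checkpoint like bert-base-cased starts out randomly initialized until fine-tuned.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
# num_labels is forwarded to the configuration via **kwargs, as described above.
model = TFAutoModelForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)

inputs = tokenizer("The movie was great!", return_tensors="tf")
logits = model(inputs).logits                     # shape (1, num_labels)
probs = tf.nn.softmax(logits, axis=-1)
predicted_class = int(tf.argmax(probs, axis=-1)[0])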
TFAutoModelForMultipleChoice¶
class transformers.TFAutoModelForMultipleChoice(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: TFAlbertForMultipleChoice (ALBERT model)
BertConfig configuration class: TFBertForMultipleChoice (BERT model)
CamembertConfig configuration class: TFCamembertForMultipleChoice (CamemBERT model)
ConvBertConfig configuration class: TFConvBertForMultipleChoice (ConvBERT model)
DistilBertConfig configuration class: TFDistilBertForMultipleChoice (DistilBERT model)
ElectraConfig configuration class: TFElectraForMultipleChoice (ELECTRA model)
FlaubertConfig configuration class: TFFlaubertForMultipleChoice (FlauBERT model)
FunnelConfig configuration class: TFFunnelForMultipleChoice (Funnel Transformer model)
LongformerConfig configuration class: TFLongformerForMultipleChoice (Longformer model)
MPNetConfig configuration class: TFMPNetForMultipleChoice (MPNet model)
MobileBertConfig configuration class: TFMobileBertForMultipleChoice (MobileBERT model)
RemBertConfig configuration class: TFRemBertForMultipleChoice (RemBERT model)
RoFormerConfig configuration class: TFRoFormerForMultipleChoice (RoFormer model)
RobertaConfig configuration class: TFRobertaForMultipleChoice (RoBERTa model)
XLMConfig configuration class: TFXLMForMultipleChoice (XLM model)
XLMRobertaConfig configuration class: TFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
XLNetConfig configuration class: TFXLNetForMultipleChoice (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForMultipleChoice.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → TFAlbertForMultipleChoice (ALBERT model)
bert → TFBertForMultipleChoice (BERT model)
camembert → TFCamembertForMultipleChoice (CamemBERT model)
convbert → TFConvBertForMultipleChoice (ConvBERT model)
distilbert → TFDistilBertForMultipleChoice (DistilBERT model)
electra → TFElectraForMultipleChoice (ELECTRA model)
flaubert → TFFlaubertForMultipleChoice (FlauBERT model)
funnel → TFFunnelForMultipleChoice (Funnel Transformer model)
longformer → TFLongformerForMultipleChoice (Longformer model)
mobilebert → TFMobileBertForMultipleChoice (MobileBERT model)
mpnet → TFMPNetForMultipleChoice (MPNet model)
rembert → TFRemBertForMultipleChoice (RemBERT model)
roberta → TFRobertaForMultipleChoice (RoBERTa model)
roformer → TFRoFormerForMultipleChoice (RoFormer model)
xlm → TFXLMForMultipleChoice (XLM model)
xlm-roberta → TFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
xlnet → TFXLNetForMultipleChoice (XLNet model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = TFAutoModelForMultipleChoice.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForMultipleChoice.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
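Because the multiple-choice head expects inputs with an extra num_choices dimension, a short illustrative sketch follows (not part of the original reference). The prompt and choices are invented, and the head of a base checkpoint is untrained until fine-tuned.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = TFAutoModelForMultipleChoice.from_pretrained('bert-base-cased')

prompt = "The chef tasted the soup and"
choices = ["added more salt.", "flew to the moon."]

# Encode each (prompt, choice) pair, then add a batch dimension so every
# tensor has shape (batch_size=1, num_choices, seq_len).
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}

logits = model(inputs).logits          # shape (1, num_choices)
best_choice = int(tf.argmax(logits, axis=-1)[0])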
TFAutoModelForTokenClassification¶
class transformers.TFAutoModelForTokenClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: TFAlbertForTokenClassification (ALBERT model)
BertConfig configuration class: TFBertForTokenClassification (BERT model)
CamembertConfig configuration class: TFCamembertForTokenClassification (CamemBERT model)
ConvBertConfig configuration class: TFConvBertForTokenClassification (ConvBERT model)
DebertaConfig configuration class: TFDebertaForTokenClassification (DeBERTa model)
DebertaV2Config configuration class: TFDebertaV2ForTokenClassification (DeBERTa-v2 model)
DistilBertConfig configuration class: TFDistilBertForTokenClassification (DistilBERT model)
ElectraConfig configuration class: TFElectraForTokenClassification (ELECTRA model)
FlaubertConfig configuration class: TFFlaubertForTokenClassification (FlauBERT model)
FunnelConfig configuration class: TFFunnelForTokenClassification (Funnel Transformer model)
LayoutLMConfig configuration class: TFLayoutLMForTokenClassification (LayoutLM model)
LongformerConfig configuration class: TFLongformerForTokenClassification (Longformer model)
MPNetConfig configuration class: TFMPNetForTokenClassification (MPNet model)
MobileBertConfig configuration class: TFMobileBertForTokenClassification (MobileBERT model)
RemBertConfig configuration class: TFRemBertForTokenClassification (RemBERT model)
RoFormerConfig configuration class: TFRoFormerForTokenClassification (RoFormer model)
RobertaConfig configuration class: TFRobertaForTokenClassification (RoBERTa model)
XLMConfig configuration class: TFXLMForTokenClassification (XLM model)
XLMRobertaConfig configuration class: TFXLMRobertaForTokenClassification (XLM-RoBERTa model)
XLNetConfig configuration class: TFXLNetForTokenClassification (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForTokenClassification.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → TFAlbertForTokenClassification (ALBERT model)
bert → TFBertForTokenClassification (BERT model)
camembert → TFCamembertForTokenClassification (CamemBERT model)
convbert → TFConvBertForTokenClassification (ConvBERT model)
deberta → TFDebertaForTokenClassification (DeBERTa model)
deberta-v2 → TFDebertaV2ForTokenClassification (DeBERTa-v2 model)
distilbert → TFDistilBertForTokenClassification (DistilBERT model)
electra → TFElectraForTokenClassification (ELECTRA model)
flaubert → TFFlaubertForTokenClassification (FlauBERT model)
funnel → TFFunnelForTokenClassification (Funnel Transformer model)
layoutlm → TFLayoutLMForTokenClassification (LayoutLM model)
longformer → TFLongformerForTokenClassification (Longformer model)
mobilebert → TFMobileBertForTokenClassification (MobileBERT model)
mpnet → TFMPNetForTokenClassification (MPNet model)
rembert → TFRemBertForTokenClassification (RemBERT model)
roberta → TFRobertaForTokenClassification (RoBERTa model)
roformer → TFRoFormerForTokenClassification (RoFormer model)
xlm → TFXLMForTokenClassification (XLM model)
xlm-roberta → TFXLMRobertaForTokenClassification (XLM-RoBERTa model)
xlnet → TFXLNetForTokenClassification (XLNet model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTokenClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = TFAutoModelForTokenClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForTokenClassification.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
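To show how per-token logits map back to tokens, here is a minimal illustrative sketch (not part of the original reference). With a base checkpoint such as bert-base-cased the head is untrained, so substitute a fine-tuned TF token-classification checkpoint for meaningful labels.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = TFAutoModelForTokenClassification.from_pretrained('bert-base-cased')

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="tf")
logits = model(inputs).logits                      # shape (1, seq_len, num_labels)
pred_ids = tf.argmax(logits, axis=-1)[0].numpy()

# Map each predicted label id back to its token and label name.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
for token, label_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[int(label_id)])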
TFAutoModelForQuestionAnswering¶
class transformers.TFAutoModelForQuestionAnswering(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: TFAlbertForQuestionAnswering (ALBERT model)
BertConfig configuration class: TFBertForQuestionAnswering (BERT model)
CamembertConfig configuration class: TFCamembertForQuestionAnswering (CamemBERT model)
ConvBertConfig configuration class: TFConvBertForQuestionAnswering (ConvBERT model)
DebertaConfig configuration class: TFDebertaForQuestionAnswering (DeBERTa model)
DebertaV2Config configuration class: TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model)
DistilBertConfig configuration class: TFDistilBertForQuestionAnswering (DistilBERT model)
ElectraConfig configuration class: TFElectraForQuestionAnswering (ELECTRA model)
FlaubertConfig configuration class: TFFlaubertForQuestionAnsweringSimple (FlauBERT model)
FunnelConfig configuration class: TFFunnelForQuestionAnswering (Funnel Transformer model)
LongformerConfig configuration class: TFLongformerForQuestionAnswering (Longformer model)
MPNetConfig configuration class: TFMPNetForQuestionAnswering (MPNet model)
MobileBertConfig configuration class: TFMobileBertForQuestionAnswering (MobileBERT model)
RemBertConfig configuration class: TFRemBertForQuestionAnswering (RemBERT model)
RoFormerConfig configuration class: TFRoFormerForQuestionAnswering (RoFormer model)
RobertaConfig configuration class: TFRobertaForQuestionAnswering (RoBERTa model)
XLMConfig configuration class: TFXLMForQuestionAnsweringSimple (XLM model)
XLMRobertaConfig configuration class: TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model)
XLNetConfig configuration class: TFXLNetForQuestionAnsweringSimple (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = TFAutoModelForQuestionAnswering.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → TFAlbertForQuestionAnswering (ALBERT model)
bert → TFBertForQuestionAnswering (BERT model)
camembert → TFCamembertForQuestionAnswering (CamemBERT model)
convbert → TFConvBertForQuestionAnswering (ConvBERT model)
deberta → TFDebertaForQuestionAnswering (DeBERTa model)
deberta-v2 → TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model)
distilbert → TFDistilBertForQuestionAnswering (DistilBERT model)
electra → TFElectraForQuestionAnswering (ELECTRA model)
flaubert → TFFlaubertForQuestionAnsweringSimple (FlauBERT model)
funnel → TFFunnelForQuestionAnswering (Funnel Transformer model)
longformer → TFLongformerForQuestionAnswering (Longformer model)
mobilebert → TFMobileBertForQuestionAnswering (MobileBERT model)
mpnet → TFMPNetForQuestionAnswering (MPNet model)
rembert → TFRemBertForQuestionAnswering (RemBERT model)
roberta → TFRobertaForQuestionAnswering (RoBERTa model)
roformer → TFRoFormerForQuestionAnswering (RoFormer model)
xlm → TFXLMForQuestionAnsweringSimple (XLM model)
xlm-roberta → TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model)
xlnet → TFXLNetForQuestionAnsweringSimple (XLNet model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForQuestionAnswering.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = TFAutoModelForQuestionAnswering.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = TFAutoModelForQuestionAnswering.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
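The question-answering head returns separate start and end logits; the sketch below (illustrative, not from the original reference) decodes the argmax span. With an untuned head the span is meaningless, so substitute a fine-tuned TF QA checkpoint for real answers.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = TFAutoModelForQuestionAnswering.from_pretrained('bert-base-cased')

question = "Where is the Eiffel Tower?"
context = "The Eiffel Tower is in Paris."
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(inputs)

# Pick the most likely start and end token positions, then decode that span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])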
FlaxAutoModel¶
class transformers.FlaxAutoModel(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the base model classes of the library from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertModel (ALBERT model)
BartConfig configuration class: FlaxBartModel (BART model)
BeitConfig configuration class: FlaxBeitModel (BEiT model)
BertConfig configuration class: FlaxBertModel (BERT model)
BigBirdConfig configuration class: FlaxBigBirdModel (BigBird model)
CLIPConfig configuration class: FlaxCLIPModel (CLIP model)
DistilBertConfig configuration class: FlaxDistilBertModel (DistilBERT model)
ElectraConfig configuration class: FlaxElectraModel (ELECTRA model)
GPT2Config configuration class: FlaxGPT2Model (OpenAI GPT-2 model)
GPTNeoConfig configuration class: FlaxGPTNeoModel (GPT Neo model)
MBartConfig configuration class: FlaxMBartModel (mBART model)
MT5Config configuration class: FlaxMT5Model (mT5 model)
MarianConfig configuration class: FlaxMarianModel (Marian model)
PegasusConfig configuration class: FlaxPegasusModel (Pegasus model)
RobertaConfig configuration class: FlaxRobertaModel (RoBERTa model)
T5Config configuration class: FlaxT5Model (T5 model)
ViTConfig configuration class: FlaxViTModel (ViT model)
Wav2Vec2Config configuration class: FlaxWav2Vec2Model (Wav2Vec2 model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModel
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModel.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → FlaxAlbertModel (ALBERT model)
bart → FlaxBartModel (BART model)
beit → FlaxBeitModel (BEiT model)
bert → FlaxBertModel (BERT model)
big_bird → FlaxBigBirdModel (BigBird model)
clip → FlaxCLIPModel (CLIP model)
distilbert → FlaxDistilBertModel (DistilBERT model)
electra → FlaxElectraModel (ELECTRA model)
gpt2 → FlaxGPT2Model (OpenAI GPT-2 model)
gpt_neo → FlaxGPTNeoModel (GPT Neo model)
marian → FlaxMarianModel (Marian model)
mbart → FlaxMBartModel (mBART model)
mt5 → FlaxMT5Model (mT5 model)
pegasus → FlaxPegasusModel (Pegasus model)
roberta → FlaxRobertaModel (RoBERTa model)
t5 → FlaxT5Model (T5 model)
vit → FlaxViTModel (ViT model)
wav2vec2 → FlaxWav2Vec2Model (Wav2Vec2 model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model using the provided conversion scripts and loading the converted model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModel.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModel.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModel.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
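One practical difference from the TF classes: Flax models consume NumPy/JAX arrays, so the tokenizer is asked for 'np' tensors. A minimal illustrative sketch (not part of the original reference):

from transformers import AutoTokenizer, FlaxAutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = FlaxAutoModel.from_pretrained('bert-base-cased')

# Flax models take NumPy arrays rather than framework-specific tensors.
inputs = tokenizer("Hello, world!", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, hidden_size)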
FlaxAutoModelForCausalLM¶
class transformers.FlaxAutoModelForCausalLM(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
GPT2Config configuration class: FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
GPTNeoConfig configuration class: FlaxGPTNeoForCausalLM (GPT Neo model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('gpt2')
>>> model = FlaxAutoModelForCausalLM.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
gpt2 → FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
gpt_neo → FlaxGPTNeoForCausalLM (GPT Neo model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model using the provided conversion scripts and loading the converted model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForCausalLM.from_pretrained('gpt2')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForCausalLM.from_pretrained('gpt2', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/gpt2_pt_model_config.json')
>>> model = FlaxAutoModelForCausalLM.from_pretrained('./pt_model/gpt2_pytorch_model.bin', from_pt=True, config=config)
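Since the causal-LM mapping above only covers gpt2 and gpt_neo, here is a minimal generation sketch (illustrative, not part of the original reference) assuming the gpt2 checkpoint; the prompt and max_length are arbitrary.

from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = FlaxAutoModelForCausalLM.from_pretrained('gpt2')

inputs = tokenizer("The Auto classes make it easy to", return_tensors="np")
# Greedy generation; GPT-2 has no pad token, so reuse the EOS id for padding.
generated = model.generate(inputs["input_ids"], max_length=30, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated.sequences[0], skip_special_tokens=True))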
FlaxAutoModelForPreTraining¶
class transformers.FlaxAutoModelForPreTraining(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForPreTraining (ALBERT model)
BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
BertConfig configuration class: FlaxBertForPreTraining (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForPreTraining (BigBird model)
ElectraConfig configuration class: FlaxElectraForPreTraining (ELECTRA model)
MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
MT5Config configuration class: FlaxMT5ForConditionalGeneration (mT5 model)
RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model)
T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model)
Wav2Vec2Config configuration class: FlaxWav2Vec2ForPreTraining (Wav2Vec2 model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForPreTraining.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → FlaxAlbertForPreTraining (ALBERT model)
bart → FlaxBartForConditionalGeneration (BART model)
bert → FlaxBertForPreTraining (BERT model)
big_bird → FlaxBigBirdForPreTraining (BigBird model)
electra → FlaxElectraForPreTraining (ELECTRA model)
mbart → FlaxMBartForConditionalGeneration (mBART model)
mt5 → FlaxMT5ForConditionalGeneration (mT5 model)
roberta → FlaxRobertaForMaskedLM (RoBERTa model)
t5 → FlaxT5ForConditionalGeneration (T5 model)
wav2vec2 → FlaxWav2Vec2ForPreTraining (Wav2Vec2 model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model using the provided conversion scripts and loading the converted model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForPreTraining.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForPreTraining.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForPreTraining.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
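As the mapping above shows, seq2seq configs resolve to conditional-generation classes rather than a dedicated pretraining head. A small illustrative check follows, assuming the t5-small checkpoint ships Flax weights:

from transformers import FlaxAutoModelForPreTraining

# T5Config is in the pretraining mapping via FlaxT5ForConditionalGeneration.
model = FlaxAutoModelForPreTraining.from_pretrained('t5-small')
print(type(model).__name__)   # FlaxT5ForConditionalGeneration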
FlaxAutoModelForMaskedLM¶
class transformers.FlaxAutoModelForMaskedLM(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForMaskedLM (ALBERT model)
BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
BertConfig configuration class: FlaxBertForMaskedLM (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForMaskedLM (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForMaskedLM (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForMaskedLM (ELECTRA model)
MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForMaskedLM.from_config(config)
classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert → FlaxAlbertForMaskedLM (ALBERT model)
bart → FlaxBartForConditionalGeneration (BART model)
bert → FlaxBertForMaskedLM (BERT model)
big_bird → FlaxBigBirdForMaskedLM (BigBird model)
distilbert → FlaxDistilBertForMaskedLM (DistilBERT model)
electra → FlaxElectraForMaskedLM (ELECTRA model)
mbart → FlaxMBartForConditionalGeneration (mBART model)
roberta → FlaxRobertaForMaskedLM (RoBERTa model)
Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model using the provided conversion scripts and loading the converted model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForMaskedLM.from_pretrained('bert-base-cased') >>> # Update configuration during loading >>> model = FlaxAutoModelForMaskedLM.from_pretrained('bert-base-cased', output_attentions=True) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json') >>> model = FlaxAutoModelForMaskedLM.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
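Once loaded, the model can be called like any Flax module on tokenized inputs. A minimal sketch of masked-token prediction, assuming standard BERT-style [MASK] handling (the tokenizer call and decoding shown here are illustrative, not part of this method's contract):
>>> from transformers import AutoTokenizer, FlaxAutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> inputs = tokenizer("Paris is the [MASK] of France.", return_tensors='np')
>>> logits = model(**inputs).logits  # shape (batch, seq_len, vocab_size)
>>> mask_pos = int((inputs['input_ids'][0] == tokenizer.mask_token_id).argmax())
>>> tokenizer.decode([int(logits[0, mask_pos].argmax(-1))])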
FlaxAutoModelForSeq2SeqLM¶
- class transformers.FlaxAutoModelForSeq2SeqLM(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
EncoderDecoderConfig configuration class: FlaxEncoderDecoderModel (Encoder decoder model)
MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
MT5Config configuration class: FlaxMT5ForConditionalGeneration (mT5 model)
MarianConfig configuration class: FlaxMarianMTModel (Marian model)
PegasusConfig configuration class: FlaxPegasusForConditionalGeneration (Pegasus model)
T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('t5-base')
>>> model = FlaxAutoModelForSeq2SeqLM.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
bart – FlaxBartForConditionalGeneration (BART model)
encoder-decoder – FlaxEncoderDecoderModel (Encoder decoder model)
marian – FlaxMarianMTModel (Marian model)
mbart – FlaxMBartForConditionalGeneration (mBART model)
mt5 – FlaxMT5ForConditionalGeneration (mT5 model)
pegasus – FlaxPegasusForConditionalGeneration (Pegasus model)
t5 – FlaxT5ForConditionalGeneration (T5 model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('t5-base', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/t5_pt_model_config.json')
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('./pt_model/t5_pytorch_model.bin', from_pt=True, config=config)
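After loading, sequence-to-sequence models are typically used through generate(). A minimal sketch with t5-base (the prompt and generation arguments are illustrative assumptions, not part of this method's contract):
>>> from transformers import AutoTokenizer, FlaxAutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained('t5-base')
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> inputs = tokenizer("translate English to German: How are you?", return_tensors='np')
>>> outputs = model.generate(inputs['input_ids'], max_length=40)  # greedy decoding by default
>>> tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)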
FlaxAutoModelForSequenceClassification¶
- class transformers.FlaxAutoModelForSequenceClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForSequenceClassification (ALBERT model)
BartConfig configuration class: FlaxBartForSequenceClassification (BART model)
BertConfig configuration class: FlaxBertForSequenceClassification (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForSequenceClassification (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForSequenceClassification (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForSequenceClassification (ELECTRA model)
MBartConfig configuration class: FlaxMBartForSequenceClassification (mBART model)
RobertaConfig configuration class: FlaxRobertaForSequenceClassification (RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForSequenceClassification.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert – FlaxAlbertForSequenceClassification (ALBERT model)
bart – FlaxBartForSequenceClassification (BART model)
bert – FlaxBertForSequenceClassification (BERT model)
big_bird – FlaxBigBirdForSequenceClassification (BigBird model)
distilbert – FlaxDistilBertForSequenceClassification (DistilBERT model)
electra – FlaxElectraForSequenceClassification (ELECTRA model)
mbart – FlaxMBartForSequenceClassification (mBART model)
roberta – FlaxRobertaForSequenceClassification (RoBERTa model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
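A short sketch of a classification forward pass follows. Note that the sequence classification head of a plain bert-base-cased checkpoint is freshly initialized, so the scores below are illustrative only; in practice you would load a checkpoint fine-tuned for classification:
>>> import jax
>>> from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained('bert-base-cased')
>>> inputs = tokenizer("This library is easy to use.", return_tensors='np')
>>> probs = jax.nn.softmax(model(**inputs).logits, axis=-1)  # per-class probabilities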
FlaxAutoModelForQuestionAnswering¶
- class transformers.FlaxAutoModelForQuestionAnswering(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForQuestionAnswering (ALBERT model)
BartConfig configuration class: FlaxBartForQuestionAnswering (BART model)
BertConfig configuration class: FlaxBertForQuestionAnswering (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForQuestionAnswering (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForQuestionAnswering (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForQuestionAnswering (ELECTRA model)
MBartConfig configuration class: FlaxMBartForQuestionAnswering (mBART model)
RobertaConfig configuration class: FlaxRobertaForQuestionAnswering (RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForQuestionAnswering.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert – FlaxAlbertForQuestionAnswering (ALBERT model)
bart – FlaxBartForQuestionAnswering (BART model)
bert – FlaxBertForQuestionAnswering (BERT model)
big_bird – FlaxBigBirdForQuestionAnswering (BigBird model)
distilbert – FlaxDistilBertForQuestionAnswering (DistilBERT model)
electra – FlaxElectraForQuestionAnswering (ELECTRA model)
mbart – FlaxMBartForQuestionAnswering (mBART model)
roberta – FlaxRobertaForQuestionAnswering (RoBERTa model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
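Extractive question answering models return start and end logits over the context tokens. A minimal sketch of span extraction, assuming a checkpoint whose QA head has actually been fine-tuned (with plain bert-base-cased the head is randomly initialized, so the extracted span would be meaningless):
>>> from transformers import AutoTokenizer, FlaxAutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained('bert-base-cased')
>>> inputs = tokenizer("Who wrote the report?", "The report was written by Jane.", return_tensors='np')
>>> outputs = model(**inputs)
>>> start = int(outputs.start_logits.argmax(-1))  # most likely start token
>>> end = int(outputs.end_logits.argmax(-1))      # most likely end token
>>> tokenizer.decode(inputs['input_ids'][0][start:end + 1])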
FlaxAutoModelForTokenClassification¶
- class transformers.FlaxAutoModelForTokenClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForTokenClassification (ALBERT model)
BertConfig configuration class: FlaxBertForTokenClassification (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForTokenClassification (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForTokenClassification (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForTokenClassification (ELECTRA model)
RobertaConfig configuration class: FlaxRobertaForTokenClassification (RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForTokenClassification.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert – FlaxAlbertForTokenClassification (ALBERT model)
bert – FlaxBertForTokenClassification (BERT model)
big_bird – FlaxBigBirdForTokenClassification (BigBird model)
distilbert – FlaxDistilBertForTokenClassification (DistilBERT model)
electra – FlaxElectraForTokenClassification (ELECTRA model)
roberta – FlaxRobertaForTokenClassification (RoBERTa model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForTokenClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForTokenClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForTokenClassification.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
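Token classification models emit one logit vector per input token, and config.id2label maps the argmax indices back to label strings. A minimal sketch (with an untuned checkpoint the labels are just the placeholder LABEL_0/LABEL_1 entries of the default config):
>>> from transformers import AutoTokenizer, FlaxAutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForTokenClassification.from_pretrained('bert-base-cased')
>>> inputs = tokenizer("Jane lives in Paris.", return_tensors='np')
>>> pred_ids = model(**inputs).logits.argmax(-1)[0]  # one label id per token
>>> [model.config.id2label[int(i)] for i in pred_ids]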
FlaxAutoModelForMultipleChoice¶
- class transformers.FlaxAutoModelForMultipleChoice(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: FlaxAlbertForMultipleChoice (ALBERT model)
BertConfig configuration class: FlaxBertForMultipleChoice (BERT model)
BigBirdConfig configuration class: FlaxBigBirdForMultipleChoice (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForMultipleChoice (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForMultipleChoice (ELECTRA model)
RobertaConfig configuration class: FlaxRobertaForMultipleChoice (RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForMultipleChoice.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert – FlaxAlbertForMultipleChoice (ALBERT model)
bert – FlaxBertForMultipleChoice (BERT model)
big_bird – FlaxBigBirdForMultipleChoice (BigBird model)
distilbert – FlaxDistilBertForMultipleChoice (DistilBERT model)
electra – FlaxElectraForMultipleChoice (ELECTRA model)
roberta – FlaxRobertaForMultipleChoice (RoBERTa model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
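Multiple choice models expect inputs of shape (batch_size, num_choices, sequence_length): each candidate is encoded against the prompt and the choice dimension is added explicitly. A minimal sketch of that reshaping (the prompt and choices are illustrative assumptions):
>>> from transformers import AutoTokenizer, FlaxAutoModelForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> prompt = "The sky is"
>>> choices = ["blue.", "made of cheese."]
>>> encoding = tokenizer([prompt, prompt], choices, return_tensors='np', padding=True)
>>> inputs = {k: v[None, :] for k, v in encoding.items()}  # add the batch dim -> (1, num_choices, seq_len)
>>> logits = model(**inputs).logits  # one score per choice, shape (1, 2)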
FlaxAutoModelForNextSentencePrediction¶
- class transformers.FlaxAutoModelForNextSentencePrediction(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
BertConfig configuration class: FlaxBertForNextSentencePrediction (BERT model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForNextSentencePrediction.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
bert – FlaxBertForNextSentencePrediction (BERT model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/bert_pt_model_config.json')
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained('./pt_model/bert_pytorch_model.bin', from_pt=True, config=config)
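The next sentence prediction head scores whether the second segment plausibly follows the first. A minimal sketch, assuming BERT's NSP convention that logit index 0 means "is the next sentence" and index 1 means "is not":
>>> from transformers import AutoTokenizer, FlaxAutoModelForNextSentencePrediction
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained('bert-base-cased')
>>> first = "The sky is blue."
>>> second = "It rarely looks green."
>>> logits = model(**tokenizer(first, second, return_tensors='np')).logits  # shape (1, 2)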
FlaxAutoModelForImageClassification¶
- class transformers.FlaxAutoModelForImageClassification(*args, **kwargs)[source]¶
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
- classmethod from_config(**kwargs)¶
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
- Parameters
config (PretrainedConfig) – The model class to instantiate is selected based on the configuration class:
BeitConfig configuration class: FlaxBeitForImageClassification (BEiT model)
ViTConfig configuration class: FlaxViTForImageClassification (ViT model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('google/vit-base-patch16-224')
>>> model = FlaxAutoModelForImageClassification.from_config(config)
- classmethod from_pretrained(*model_args, **kwargs)¶
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
beit – FlaxBeitForImageClassification (BEiT model)
vit – FlaxViTForImageClassification (ViT model)
- Parameters
pretrained_model_name_or_path (str or os.PathLike) – Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) – Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) – Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) – Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224')
>>> # Update configuration during loading
>>> model = FlaxAutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained('./pt_model/vit_pt_model_config.json')
>>> model = FlaxAutoModelForImageClassification.from_pretrained('./pt_model/vit_pytorch_model.bin', from_pt=True, config=config)
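Image inputs go through a feature extractor rather than a tokenizer. A minimal sketch mirroring the standard ViT usage (the checkpoint and image URL are the usual examples from the ViT documentation, assumed here rather than prescribed by this method):
>>> import requests
>>> from PIL import Image
>>> from transformers import ViTFeatureExtractor, FlaxAutoModelForImageClassification
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
>>> model = FlaxAutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224')
>>> inputs = feature_extractor(images=image, return_tensors='np')  # yields pixel_values
>>> logits = model(**inputs).logits
>>> model.config.id2label[int(logits.argmax(-1))]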