Transformers documentation

Auto Classes

You are viewing the v4.44.2 documentation. A newer version, v4.46.0, is available.

In many cases, the architecture you want to use can be guessed from the name or path of the pretrained model you are supplying to the from_pretrained() method. The auto classes are here to do this job for you, so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.

Instantiating one of AutoConfig, AutoModel, or AutoTokenizer will directly create a class of the relevant architecture. For instance,

model = AutoModel.from_pretrained("google-bert/bert-base-cased")

will create a model that is an instance of BertModel.

There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax).

Extending the Auto Classes

Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom model class NewModel, make sure you have a NewModelConfig, and then you can add them to the auto classes like this:

from transformers import AutoConfig, AutoModel

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)

You will then be able to use the auto classes as you usually would!

If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same key you use when registering the config (here "new-model").

Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig).
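Putting the above together, here is a minimal end-to-end sketch. The NewModel/NewModelConfig classes and the hidden_size parameter are illustrative, not part of the library:

```python
import torch
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel


class NewModelConfig(PretrainedConfig):
    # model_type must match the key passed to AutoConfig.register below
    model_type = "new-model"

    def __init__(self, hidden_size=16, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size


class NewModel(PreTrainedModel):
    # config_class must match the config class passed to AutoModel.register below
    config_class = NewModelConfig

    def __init__(self, config):
        super().__init__(config)
        self.linear = torch.nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, x):
        return self.linear(x)


AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)

# The auto classes now resolve the custom type like any built-in one:
config = AutoConfig.for_model("new-model", hidden_size=32)
model = AutoModel.from_config(config)
```

After saving such a model with save_pretrained(), AutoModel.from_pretrained() on the saved directory will likewise resolve to NewModel.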

AutoConfig

class transformers.AutoConfig

( )

This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained

( pretrained_model_name_or_path **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
    • A path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • A path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object.

    If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.

  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • kwargs (additional keyword arguments, optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.

Instantiate one of the configuration classes of the library from a pretrained model configuration.

The configuration class to instantiate is selected based on the model_type property of the config object that is loaded, or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — AlbertConfig (ALBERT model)
  • align — AlignConfig (ALIGN model)
  • altclip — AltCLIPConfig (AltCLIP model)
  • audio-spectrogram-transformer — ASTConfig (Audio Spectrogram Transformer model)
  • autoformer — AutoformerConfig (Autoformer model)
  • bark — BarkConfig (Bark model)
  • bart — BartConfig (BART model)
  • beit — BeitConfig (BEiT model)
  • bert — BertConfig (BERT model)
  • bert-generation — BertGenerationConfig (Bert Generation model)
  • big_bird — BigBirdConfig (BigBird model)
  • bigbird_pegasus — BigBirdPegasusConfig (BigBird-Pegasus model)
  • biogpt — BioGptConfig (BioGpt model)
  • bit — BitConfig (BiT model)
  • blenderbot — BlenderbotConfig (Blenderbot model)
  • blenderbot-small — BlenderbotSmallConfig (BlenderbotSmall model)
  • blip — BlipConfig (BLIP model)
  • blip-2 — Blip2Config (BLIP-2 model)
  • bloom — BloomConfig (BLOOM model)
  • bridgetower — BridgeTowerConfig (BridgeTower model)
  • bros — BrosConfig (BROS model)
  • camembert — CamembertConfig (CamemBERT model)
  • canine — CanineConfig (CANINE model)
  • chameleon — ChameleonConfig (Chameleon model)
  • chinese_clip — ChineseCLIPConfig (Chinese-CLIP model)
  • chinese_clip_vision_model — ChineseCLIPVisionConfig (ChineseCLIPVisionModel model)
  • clap — ClapConfig (CLAP model)
  • clip — CLIPConfig (CLIP model)
  • clip_vision_model — CLIPVisionConfig (CLIPVisionModel model)
  • clipseg — CLIPSegConfig (CLIPSeg model)
  • clvp — ClvpConfig (CLVP model)
  • code_llama — LlamaConfig (CodeLlama model)
  • codegen — CodeGenConfig (CodeGen model)
  • cohere — CohereConfig (Cohere model)
  • conditional_detr — ConditionalDetrConfig (Conditional DETR model)
  • convbert — ConvBertConfig (ConvBERT model)
  • convnext — ConvNextConfig (ConvNeXT model)
  • convnextv2 — ConvNextV2Config (ConvNeXTV2 model)
  • cpmant — CpmAntConfig (CPM-Ant model)
  • ctrl — CTRLConfig (CTRL model)
  • cvt — CvtConfig (CvT model)
  • data2vec-audio — Data2VecAudioConfig (Data2VecAudio model)
  • data2vec-text — Data2VecTextConfig (Data2VecText model)
  • data2vec-vision — Data2VecVisionConfig (Data2VecVision model)
  • dbrx — DbrxConfig (DBRX model)
  • deberta — DebertaConfig (DeBERTa model)
  • deberta-v2 — DebertaV2Config (DeBERTa-v2 model)
  • decision_transformer — DecisionTransformerConfig (Decision Transformer model)
  • deformable_detr — DeformableDetrConfig (Deformable DETR model)
  • deit — DeiTConfig (DeiT model)
  • depth_anything — DepthAnythingConfig (Depth Anything model)
  • deta — DetaConfig (DETA model)
  • detr — DetrConfig (DETR model)
  • dinat — DinatConfig (DiNAT model)
  • dinov2 — Dinov2Config (DINOv2 model)
  • distilbert — DistilBertConfig (DistilBERT model)
  • donut-swin — DonutSwinConfig (DonutSwin model)
  • dpr — DPRConfig (DPR model)
  • dpt — DPTConfig (DPT model)
  • efficientformer — EfficientFormerConfig (EfficientFormer model)
  • efficientnet — EfficientNetConfig (EfficientNet model)
  • electra — ElectraConfig (ELECTRA model)
  • encodec — EncodecConfig (EnCodec model)
  • encoder-decoder — EncoderDecoderConfig (Encoder decoder model)
  • ernie — ErnieConfig (ERNIE model)
  • ernie_m — ErnieMConfig (ErnieM model)
  • esm — EsmConfig (ESM model)
  • falcon — FalconConfig (Falcon model)
  • fastspeech2_conformer — FastSpeech2ConformerConfig (FastSpeech2Conformer model)
  • flaubert — FlaubertConfig (FlauBERT model)
  • flava — FlavaConfig (FLAVA model)
  • fnet — FNetConfig (FNet model)
  • focalnet — FocalNetConfig (FocalNet model)
  • fsmt — FSMTConfig (FairSeq Machine-Translation model)
  • funnel — FunnelConfig (Funnel Transformer model)
  • fuyu — FuyuConfig (Fuyu model)
  • gemma — GemmaConfig (Gemma model)
  • gemma2 — Gemma2Config (Gemma2 model)
  • git — GitConfig (GIT model)
  • glpn — GLPNConfig (GLPN model)
  • gpt-sw3 — GPT2Config (GPT-Sw3 model)
  • gpt2 — GPT2Config (OpenAI GPT-2 model)
  • gpt_bigcode — GPTBigCodeConfig (GPTBigCode model)
  • gpt_neo — GPTNeoConfig (GPT Neo model)
  • gpt_neox — GPTNeoXConfig (GPT NeoX model)
  • gpt_neox_japanese — GPTNeoXJapaneseConfig (GPT NeoX Japanese model)
  • gptj — GPTJConfig (GPT-J model)
  • gptsan-japanese — GPTSanJapaneseConfig (GPTSAN-japanese model)
  • graphormer — GraphormerConfig (Graphormer model)
  • grounding-dino — GroundingDinoConfig (Grounding DINO model)
  • groupvit — GroupViTConfig (GroupViT model)
  • hiera — HieraConfig (Hiera model)
  • hubert — HubertConfig (Hubert model)
  • ibert — IBertConfig (I-BERT model)
  • idefics — IdeficsConfig (IDEFICS model)
  • idefics2 — Idefics2Config (Idefics2 model)
  • imagegpt — ImageGPTConfig (ImageGPT model)
  • informer — InformerConfig (Informer model)
  • instructblip — InstructBlipConfig (InstructBLIP model)
  • instructblipvideo — InstructBlipVideoConfig (InstructBlipVideo model)
  • jamba — JambaConfig (Jamba model)
  • jetmoe — JetMoeConfig (JetMoe model)
  • jukebox — JukeboxConfig (Jukebox model)
  • kosmos-2 — Kosmos2Config (KOSMOS-2 model)
  • layoutlm — LayoutLMConfig (LayoutLM model)
  • layoutlmv2 — LayoutLMv2Config (LayoutLMv2 model)
  • layoutlmv3 — LayoutLMv3Config (LayoutLMv3 model)
  • led — LEDConfig (LED model)
  • levit — LevitConfig (LeViT model)
  • lilt — LiltConfig (LiLT model)
  • llama — LlamaConfig (LLaMA model)
  • llava — LlavaConfig (LLaVa model)
  • llava-next-video — LlavaNextVideoConfig (LLaVa-NeXT-Video model)
  • llava_next — LlavaNextConfig (LLaVA-NeXT model)
  • longformer — LongformerConfig (Longformer model)
  • longt5 — LongT5Config (LongT5 model)
  • luke — LukeConfig (LUKE model)
  • lxmert — LxmertConfig (LXMERT model)
  • m2m_100 — M2M100Config (M2M100 model)
  • mamba — MambaConfig (Mamba model)
  • mamba2 — Mamba2Config (mamba2 model)
  • marian — MarianConfig (Marian model)
  • markuplm — MarkupLMConfig (MarkupLM model)
  • mask2former — Mask2FormerConfig (Mask2Former model)
  • maskformer — MaskFormerConfig (MaskFormer model)
  • maskformer-swin — MaskFormerSwinConfig (MaskFormerSwin model)
  • mbart — MBartConfig (mBART model)
  • mctct — MCTCTConfig (M-CTC-T model)
  • mega — MegaConfig (MEGA model)
  • megatron-bert — MegatronBertConfig (Megatron-BERT model)
  • mgp-str — MgpstrConfig (MGP-STR model)
  • mistral — MistralConfig (Mistral model)
  • mixtral — MixtralConfig (Mixtral model)
  • mobilebert — MobileBertConfig (MobileBERT model)
  • mobilenet_v1 — MobileNetV1Config (MobileNetV1 model)
  • mobilenet_v2 — MobileNetV2Config (MobileNetV2 model)
  • mobilevit — MobileViTConfig (MobileViT model)
  • mobilevitv2 — MobileViTV2Config (MobileViTV2 model)
  • mpnet — MPNetConfig (MPNet model)
  • mpt — MptConfig (MPT model)
  • mra — MraConfig (MRA model)
  • mt5 — MT5Config (MT5 model)
  • musicgen — MusicgenConfig (MusicGen model)
  • musicgen_melody — MusicgenMelodyConfig (MusicGen Melody model)
  • mvp — MvpConfig (MVP model)
  • nat — NatConfig (NAT model)
  • nemotron — NemotronConfig (Nemotron model)
  • nezha — NezhaConfig (Nezha model)
  • nllb-moe — NllbMoeConfig (NLLB-MOE model)
  • nougat — VisionEncoderDecoderConfig (Nougat model)
  • nystromformer — NystromformerConfig (Nyströmformer model)
  • olmo — OlmoConfig (OLMo model)
  • oneformer — OneFormerConfig (OneFormer model)
  • open-llama — OpenLlamaConfig (OpenLlama model)
  • openai-gpt — OpenAIGPTConfig (OpenAI GPT model)
  • opt — OPTConfig (OPT model)
  • owlv2 — Owlv2Config (OWLv2 model)
  • owlvit — OwlViTConfig (OWL-ViT model)
  • paligemma — PaliGemmaConfig (PaliGemma model)
  • patchtsmixer — PatchTSMixerConfig (PatchTSMixer model)
  • patchtst — PatchTSTConfig (PatchTST model)
  • pegasus — PegasusConfig (Pegasus model)
  • pegasus_x — PegasusXConfig (PEGASUS-X model)
  • perceiver — PerceiverConfig (Perceiver model)
  • persimmon — PersimmonConfig (Persimmon model)
  • phi — PhiConfig (Phi model)
  • phi3 — Phi3Config (Phi3 model)
  • pix2struct — Pix2StructConfig (Pix2Struct model)
  • plbart — PLBartConfig (PLBart model)
  • poolformer — PoolFormerConfig (PoolFormer model)
  • pop2piano — Pop2PianoConfig (Pop2Piano model)
  • prophetnet — ProphetNetConfig (ProphetNet model)
  • pvt — PvtConfig (PVT model)
  • pvt_v2 — PvtV2Config (PVTv2 model)
  • qdqbert — QDQBertConfig (QDQBert model)
  • qwen2 — Qwen2Config (Qwen2 model)
  • qwen2_moe — Qwen2MoeConfig (Qwen2MoE model)
  • rag — RagConfig (RAG model)
  • realm — RealmConfig (REALM model)
  • recurrent_gemma — RecurrentGemmaConfig (RecurrentGemma model)
  • reformer — ReformerConfig (Reformer model)
  • regnet — RegNetConfig (RegNet model)
  • rembert — RemBertConfig (RemBERT model)
  • resnet — ResNetConfig (ResNet model)
  • retribert — RetriBertConfig (RetriBERT model)
  • roberta — RobertaConfig (RoBERTa model)
  • roberta-prelayernorm — RobertaPreLayerNormConfig (RoBERTa-PreLayerNorm model)
  • roc_bert — RoCBertConfig (RoCBert model)
  • roformer — RoFormerConfig (RoFormer model)
  • rt_detr — RTDetrConfig (RT-DETR model)
  • rt_detr_resnet — RTDetrResNetConfig (RT-DETR-ResNet model)
  • rwkv — RwkvConfig (RWKV model)
  • sam — SamConfig (SAM model)
  • seamless_m4t — SeamlessM4TConfig (SeamlessM4T model)
  • seamless_m4t_v2 — SeamlessM4Tv2Config (SeamlessM4Tv2 model)
  • segformer — SegformerConfig (SegFormer model)
  • seggpt — SegGptConfig (SegGPT model)
  • sew — SEWConfig (SEW model)
  • sew-d — SEWDConfig (SEW-D model)
  • siglip — SiglipConfig (SigLIP model)
  • siglip_vision_model — SiglipVisionConfig (SiglipVisionModel model)
  • speech-encoder-decoder — SpeechEncoderDecoderConfig (Speech Encoder decoder model)
  • speech_to_text — Speech2TextConfig (Speech2Text model)
  • speech_to_text_2 — Speech2Text2Config (Speech2Text2 model)
  • speecht5 — SpeechT5Config (SpeechT5 model)
  • splinter — SplinterConfig (Splinter model)
  • squeezebert — SqueezeBertConfig (SqueezeBERT model)
  • stablelm — StableLmConfig (StableLm model)
  • starcoder2 — Starcoder2Config (Starcoder2 model)
  • superpoint — SuperPointConfig (SuperPoint model)
  • swiftformer — SwiftFormerConfig (SwiftFormer model)
  • swin — SwinConfig (Swin Transformer model)
  • swin2sr — Swin2SRConfig (Swin2SR model)
  • swinv2 — Swinv2Config (Swin Transformer V2 model)
  • switch_transformers — SwitchTransformersConfig (SwitchTransformers model)
  • t5 — T5Config (T5 model)
  • table-transformer — TableTransformerConfig (Table Transformer model)
  • tapas — TapasConfig (TAPAS model)
  • time_series_transformer — TimeSeriesTransformerConfig (Time Series Transformer model)
  • timesformer — TimesformerConfig (TimeSformer model)
  • timm_backbone — TimmBackboneConfig (TimmBackbone model)
  • trajectory_transformer — TrajectoryTransformerConfig (Trajectory Transformer model)
  • transfo-xl — TransfoXLConfig (Transformer-XL model)
  • trocr — TrOCRConfig (TrOCR model)
  • tvlt — TvltConfig (TVLT model)
  • tvp — TvpConfig (TVP model)
  • udop — UdopConfig (UDOP model)
  • umt5 — UMT5Config (UMT5 model)
  • unispeech — UniSpeechConfig (UniSpeech model)
  • unispeech-sat — UniSpeechSatConfig (UniSpeechSat model)
  • univnet — UnivNetConfig (UnivNet model)
  • upernet — UperNetConfig (UPerNet model)
  • van — VanConfig (VAN model)
  • video_llava — VideoLlavaConfig (VideoLlava model)
  • videomae — VideoMAEConfig (VideoMAE model)
  • vilt — ViltConfig (ViLT model)
  • vipllava — VipLlavaConfig (VipLlava model)
  • vision-encoder-decoder — VisionEncoderDecoderConfig (Vision Encoder decoder model)
  • vision-text-dual-encoder — VisionTextDualEncoderConfig (VisionTextDualEncoder model)
  • visual_bert — VisualBertConfig (VisualBERT model)
  • vit — ViTConfig (ViT model)
  • vit_hybrid — ViTHybridConfig (ViT Hybrid model)
  • vit_mae — ViTMAEConfig (ViTMAE model)
  • vit_msn — ViTMSNConfig (ViTMSN model)
  • vitdet — VitDetConfig (VitDet model)
  • vitmatte — VitMatteConfig (ViTMatte model)
  • vits — VitsConfig (VITS model)
  • vivit — VivitConfig (ViViT model)
  • wav2vec2 — Wav2Vec2Config (Wav2Vec2 model)
  • wav2vec2-bert — Wav2Vec2BertConfig (Wav2Vec2-BERT model)
  • wav2vec2-conformer — Wav2Vec2ConformerConfig (Wav2Vec2-Conformer model)
  • wavlm — WavLMConfig (WavLM model)
  • whisper — WhisperConfig (Whisper model)
  • xclip — XCLIPConfig (X-CLIP model)
  • xglm — XGLMConfig (XGLM model)
  • xlm — XLMConfig (XLM model)
  • xlm-prophetnet — XLMProphetNetConfig (XLM-ProphetNet model)
  • xlm-roberta — XLMRobertaConfig (XLM-RoBERTa model)
  • xlm-roberta-xl — XLMRobertaXLConfig (XLM-RoBERTa-XL model)
  • xlnet — XLNetConfig (XLNet model)
  • xmod — XmodConfig (X-MOD model)
  • yolos — YolosConfig (YOLOS model)
  • yoso — YosoConfig (YOSO model)
  • zoedepth — ZoeDepthConfig (ZoeDepth model)

Examples:

>>> from transformers import AutoConfig

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased")

>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased")

>>> # If configuration file is in a directory (e.g., was saved using *save_pretrained('./test/saved_model/')*).
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/")

>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/my_configuration.json")

>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
>>> config.output_attentions
True

>>> config, unused_kwargs = AutoConfig.from_pretrained(
...     "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
... )
>>> config.output_attentions
True

>>> unused_kwargs
{'foo': False}

register

( model_type config exist_ok = False )

Parameters

  • model_type (str) — The model type like “bert” or “gpt”.
  • config (PretrainedConfig) — The config to register.

Register a new configuration for this class.
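For instance, a minimal registration sketch (the "my-model" key and MyConfig class are illustrative names, not part of the library):

```python
from transformers import AutoConfig, PretrainedConfig


class MyConfig(PretrainedConfig):
    # model_type must equal the key used when registering below
    model_type = "my-model"


AutoConfig.register("my-model", MyConfig)

# AutoConfig can now build the custom config from its model type:
config = AutoConfig.for_model("my-model")
```

Pass exist_ok=True to allow registering over a model type that already exists in the mapping.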

AutoTokenizer

class transformers.AutoTokenizer

( )

This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained

( pretrained_model_name_or_path *inputs **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co.
    • A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • A path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g.: ./my_model_directory/vocab.txt. (Not applicable to all derived classes)
  • inputs (additional positional arguments, optional) — Will be passed along to the Tokenizer __init__() method.
  • config (PretrainedConfig, optional) — The configuration object used to determine the tokenizer class to instantiate.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
  • use_fast (bool, optional, defaults to True) — Use a fast Rust-based tokenizer if it is supported for a given model. If a fast tokenizer is not available for a given model, a normal Python-based tokenizer is returned instead.
  • tokenizer_type (str, optional) — Tokenizer type to be loaded.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.

Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.

The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — AlbertTokenizer or AlbertTokenizerFast (ALBERT model)
  • align — BertTokenizer or BertTokenizerFast (ALIGN model)
  • bark — BertTokenizer or BertTokenizerFast (Bark model)
  • bart — BartTokenizer or BartTokenizerFast (BART model)
  • barthez — BarthezTokenizer or BarthezTokenizerFast (BARThez model)
  • bartpho — BartphoTokenizer (BARTpho model)
  • bert — BertTokenizer or BertTokenizerFast (BERT model)
  • bert-generation — BertGenerationTokenizer (Bert Generation model)
  • bert-japanese — BertJapaneseTokenizer (BertJapanese model)
  • bertweet — BertweetTokenizer (BERTweet model)
  • big_bird — BigBirdTokenizer or BigBirdTokenizerFast (BigBird model)
  • bigbird_pegasus — PegasusTokenizer or PegasusTokenizerFast (BigBird-Pegasus model)
  • biogpt — BioGptTokenizer (BioGpt model)
  • blenderbot — BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model)
  • blenderbot-small — BlenderbotSmallTokenizer (BlenderbotSmall model)
  • blip — BertTokenizer or BertTokenizerFast (BLIP model)
  • blip-2 — GPT2Tokenizer or GPT2TokenizerFast (BLIP-2 model)
  • bloom — BloomTokenizerFast (BLOOM model)
  • bridgetower — RobertaTokenizer or RobertaTokenizerFast (BridgeTower model)
  • bros — BertTokenizer or BertTokenizerFast (BROS model)
  • byt5 — ByT5Tokenizer (ByT5 model)
  • camembert — CamembertTokenizer or CamembertTokenizerFast (CamemBERT model)
  • canine — CanineTokenizer (CANINE model)
  • chameleon — LlamaTokenizer or LlamaTokenizerFast (Chameleon model)
  • chinese_clip — BertTokenizer or BertTokenizerFast (Chinese-CLIP model)
  • clap — RobertaTokenizer or RobertaTokenizerFast (CLAP model)
  • clip — CLIPTokenizer or CLIPTokenizerFast (CLIP model)
  • clipseg — CLIPTokenizer or CLIPTokenizerFast (CLIPSeg model)
  • clvp — ClvpTokenizer (CLVP model)
  • code_llama — CodeLlamaTokenizer or CodeLlamaTokenizerFast (CodeLlama model)
  • codegen — CodeGenTokenizer or CodeGenTokenizerFast (CodeGen model)
  • cohere — CohereTokenizerFast (Cohere model)
  • convbert — ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model)
  • cpm — CpmTokenizer or CpmTokenizerFast (CPM model)
  • cpmant — CpmAntTokenizer (CPM-Ant model)
  • ctrl — CTRLTokenizer (CTRL model)
  • data2vec-audio — Wav2Vec2CTCTokenizer (Data2VecAudio model)
  • data2vec-text — RobertaTokenizer or RobertaTokenizerFast (Data2VecText model)
  • dbrx — GPT2Tokenizer or GPT2TokenizerFast (DBRX model)
  • deberta — DebertaTokenizer or DebertaTokenizerFast (DeBERTa model)
  • deberta-v2 — DebertaV2Tokenizer or DebertaV2TokenizerFast (DeBERTa-v2 model)
  • distilbert — DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model)
  • dpr — DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model)
  • electra — ElectraTokenizer or ElectraTokenizerFast (ELECTRA model)
  • ernie — BertTokenizer or BertTokenizerFast (ERNIE model)
  • ernie_m — ErnieMTokenizer (ErnieM model)
  • esm — EsmTokenizer (ESM model)
  • falcon — PreTrainedTokenizerFast (Falcon model)
  • fastspeech2_conformer — (FastSpeech2Conformer model)
  • flaubert — FlaubertTokenizer (FlauBERT model)
  • fnet — FNetTokenizer or FNetTokenizerFast (FNet model)
  • fsmt — FSMTTokenizer (FairSeq Machine-Translation model)
  • funnel — FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model)
  • gemma — GemmaTokenizer or GemmaTokenizerFast (Gemma model)
  • gemma2 — GemmaTokenizer or GemmaTokenizerFast (Gemma2 model)
  • git — BertTokenizer or BertTokenizerFast (GIT model)
  • gpt-sw3 — GPTSw3Tokenizer (GPT-Sw3 model)
  • gpt2 — GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model)
  • gpt_bigcode — GPT2Tokenizer or GPT2TokenizerFast (GPTBigCode model)
  • gpt_neo — GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model)
  • gpt_neox — GPTNeoXTokenizerFast (GPT NeoX model)
  • gpt_neox_japanese — GPTNeoXJapaneseTokenizer (GPT NeoX Japanese model)
  • gptj — GPT2Tokenizer or GPT2TokenizerFast (GPT-J model)
  • gptsan-japanese — GPTSanJapaneseTokenizer (GPTSAN-japanese model)
  • grounding-dino — BertTokenizer or BertTokenizerFast (Grounding DINO model)
  • groupvit — CLIPTokenizer or CLIPTokenizerFast (GroupViT model)
  • herbert — HerbertTokenizer or HerbertTokenizerFast (HerBERT model)
  • hubert — Wav2Vec2CTCTokenizer (Hubert model)
  • ibert — RobertaTokenizer or RobertaTokenizerFast (I-BERT model)
  • idefics — LlamaTokenizerFast (IDEFICS model)
  • idefics2 — LlamaTokenizer or LlamaTokenizerFast (Idefics2 model)
  • instructblip — GPT2Tokenizer or GPT2TokenizerFast (InstructBLIP model)
  • instructblipvideo — GPT2Tokenizer or GPT2TokenizerFast (InstructBlipVideo model)
  • jamba — LlamaTokenizer or LlamaTokenizerFast (Jamba model)
  • jetmoe — LlamaTokenizer or LlamaTokenizerFast (JetMoe model)
  • jukebox — JukeboxTokenizer (Jukebox model)
  • kosmos-2 — XLMRobertaTokenizer or XLMRobertaTokenizerFast (KOSMOS-2 model)
  • layoutlm — LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model)
  • layoutlmv2 — LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model)
  • layoutlmv3 — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LayoutLMv3 model)
  • layoutxlm — LayoutXLMTokenizer or LayoutXLMTokenizerFast (LayoutXLM model)
  • led — LEDTokenizer or LEDTokenizerFast (LED model)
  • lilt — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LiLT model)
  • llama — LlamaTokenizer or LlamaTokenizerFast (LLaMA model)
  • llava — LlamaTokenizer or LlamaTokenizerFast (LLaVa model)
  • llava-next-video — LlamaTokenizer or LlamaTokenizerFast (LLaVa-NeXT-Video model)
  • llava_next — LlamaTokenizer or LlamaTokenizerFast (LLaVA-NeXT model)
  • longformer — LongformerTokenizer or LongformerTokenizerFast (Longformer model)
  • longt5 — T5Tokenizer or T5TokenizerFast (LongT5 model)
  • luke — LukeTokenizer (LUKE model)
  • lxmert — LxmertTokenizer or LxmertTokenizerFast (LXMERT model)
  • m2m_100 — M2M100Tokenizer (M2M100 model)
  • mamba — GPTNeoXTokenizerFast (Mamba model)
  • mamba2 — GPTNeoXTokenizerFast (mamba2 model)
  • marian — MarianTokenizer (Marian model)
  • mbart — MBartTokenizer or MBartTokenizerFast (mBART model)
  • mbart50 — MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model)
  • mega — RobertaTokenizer or RobertaTokenizerFast (MEGA model)
  • megatron-bert — BertTokenizer or BertTokenizerFast (Megatron-BERT model)
  • mgp-str — MgpstrTokenizer (MGP-STR model)
  • mistral — LlamaTokenizer or LlamaTokenizerFast (Mistral model)
  • mixtral — LlamaTokenizer or LlamaTokenizerFast (Mixtral model)
  • mluke — MLukeTokenizer (mLUKE model)
  • mobilebert — MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model)
  • mpnet — MPNetTokenizer or MPNetTokenizerFast (MPNet model)
  • mpt — GPTNeoXTokenizerFast (MPT model)
  • mra — RobertaTokenizer or RobertaTokenizerFast (MRA model)
  • mt5 — MT5Tokenizer or MT5TokenizerFast (MT5 model)
  • musicgen — T5Tokenizer or T5TokenizerFast (MusicGen model)
  • musicgen_melody — T5Tokenizer or T5TokenizerFast (MusicGen Melody model)
  • mvp — MvpTokenizer or MvpTokenizerFast (MVP model)
  • nezha — BertTokenizer or BertTokenizerFast (Nezha model)
  • nllb — NllbTokenizer or NllbTokenizerFast (NLLB model)
  • nllb-moe — NllbTokenizer or NllbTokenizerFast (NLLB-MOE model)
  • nystromformer — AlbertTokenizer or AlbertTokenizerFast (Nyströmformer model)
  • olmo — GPTNeoXTokenizerFast (OLMo model)
  • oneformer — CLIPTokenizer or CLIPTokenizerFast (OneFormer model)
  • openai-gpt — OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model)
  • opt — GPT2Tokenizer or GPT2TokenizerFast (OPT model)
  • owlv2 — CLIPTokenizer or CLIPTokenizerFast (OWLv2 model)
  • owlvit — CLIPTokenizer or CLIPTokenizerFast (OWL-ViT model)
  • paligemma — LlamaTokenizer or LlamaTokenizerFast (PaliGemma model)
  • pegasus — PegasusTokenizer or PegasusTokenizerFast (Pegasus model)
  • pegasus_x — PegasusTokenizer or PegasusTokenizerFast (PEGASUS-X model)
  • perceiver — PerceiverTokenizer (Perceiver model)
  • persimmon — LlamaTokenizer or LlamaTokenizerFast (Persimmon model)
  • phi — CodeGenTokenizer or CodeGenTokenizerFast (Phi model)
  • phi3 — LlamaTokenizer or LlamaTokenizerFast (Phi3 model)
  • phobert — PhobertTokenizer (PhoBERT model)
  • pix2struct — T5Tokenizer or T5TokenizerFast (Pix2Struct model)
  • plbart — PLBartTokenizer (PLBart model)
  • prophetnet — ProphetNetTokenizer (ProphetNet model)
  • qdqbert — BertTokenizer or BertTokenizerFast (QDQBert model)
  • qwen2 — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2 model)
  • qwen2_moe — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2MoE model)
  • rag — RagTokenizer (RAG model)
  • realm — RealmTokenizer or RealmTokenizerFast (REALM model)
  • recurrent_gemma — GemmaTokenizer or GemmaTokenizerFast (RecurrentGemma model)
  • reformer — ReformerTokenizer or ReformerTokenizerFast (Reformer model)
  • rembert — RemBertTokenizer or RemBertTokenizerFast (RemBERT model)
  • retribert — RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model)
  • roberta — RobertaTokenizer or RobertaTokenizerFast (RoBERTa model)
  • roberta-prelayernorm — RobertaTokenizer or RobertaTokenizerFast (RoBERTa-PreLayerNorm model)
  • roc_bert — RoCBertTokenizer (RoCBert model)
  • roformer — RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model)
  • rwkv — GPTNeoXTokenizerFast (RWKV model)
  • seamless_m4t — SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4T model)
  • seamless_m4t_v2 — SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4Tv2 model)
  • siglip — SiglipTokenizer (SigLIP model)
  • speech_to_text — Speech2TextTokenizer (Speech2Text model)
  • speech_to_text_2 — Speech2Text2Tokenizer (Speech2Text2 model)
  • speecht5 — SpeechT5Tokenizer (SpeechT5 model)
  • splinter — SplinterTokenizer or SplinterTokenizerFast (Splinter model)
  • squeezebert — SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model)
  • stablelm — GPTNeoXTokenizerFast (StableLm model)
  • starcoder2 — GPT2Tokenizer or GPT2TokenizerFast (Starcoder2 model)
  • switch_transformers — T5Tokenizer or T5TokenizerFast (SwitchTransformers model)
  • t5 — T5Tokenizer or T5TokenizerFast (T5 model)
  • tapas — TapasTokenizer (TAPAS model)
  • tapex — TapexTokenizer (TAPEX model)
  • transfo-xl — TransfoXLTokenizer (Transformer-XL model)
  • tvp — BertTokenizer or BertTokenizerFast (TVP model)
  • udop — UdopTokenizer or UdopTokenizerFast (UDOP model)
  • umt5 — T5Tokenizer or T5TokenizerFast (UMT5 model)
  • video_llava — LlamaTokenizer or LlamaTokenizerFast (VideoLlava model)
  • vilt — BertTokenizer or BertTokenizerFast (ViLT model)
  • vipllava — LlamaTokenizer or LlamaTokenizerFast (VipLlava model)
  • visual_bert — BertTokenizer or BertTokenizerFast (VisualBERT model)
  • vits — VitsTokenizer (VITS model)
  • wav2vec2 — Wav2Vec2CTCTokenizer (Wav2Vec2 model)
  • wav2vec2-bert — Wav2Vec2CTCTokenizer (Wav2Vec2-BERT model)
  • wav2vec2-conformer — Wav2Vec2CTCTokenizer (Wav2Vec2-Conformer model)
  • wav2vec2_phoneme — Wav2Vec2PhonemeCTCTokenizer (Wav2Vec2Phoneme model)
  • whisper — WhisperTokenizer or WhisperTokenizerFast (Whisper model)
  • xclip — CLIPTokenizer or CLIPTokenizerFast (X-CLIP model)
  • xglm — XGLMTokenizer or XGLMTokenizerFast (XGLM model)
  • xlm — XLMTokenizer (XLM model)
  • xlm-prophetnet — XLMProphetNetTokenizer (XLM-ProphetNet model)
  • xlm-roberta — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model)
  • xlm-roberta-xl — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa-XL model)
  • xlnet — XLNetTokenizer or XLNetTokenizerFast (XLNet model)
  • xmod — XLMRobertaTokenizer or XLMRobertaTokenizerFast (X-MOD model)
  • yoso — AlbertTokenizer or AlbertTokenizerFast (YOSO model)

Examples:

>>> from transformers import AutoTokenizer

>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
>>> # tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/")

>>> # Download vocabulary from huggingface.co and define model-specific arguments
>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base", add_prefix_space=True)

register

< >

( config_class slow_tokenizer_class = None fast_tokenizer_class = None exist_ok = False )

Parameters

  • config_class (PretrainedConfig) — The configuration corresponding to the model to register.
  • slow_tokenizer_class (PretrainedTokenizer, optional) — The slow tokenizer to register.
  • fast_tokenizer_class (PretrainedTokenizerFast, optional) — The fast tokenizer to register.

Register a new tokenizer in this mapping.
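The registration mechanism above can be pictured as a mapping from a configuration class to its (slow, fast) tokenizer classes. The following is a minimal, self-contained sketch of that pattern; the class names (`NewModelConfig`, `NewModelTokenizer`, …) are illustrative placeholders and this is not the actual transformers implementation:

```python
# Hypothetical sketch of the registry behind AutoTokenizer.register();
# the real implementation lives inside transformers and differs in detail.

class NewModelConfig:
    model_type = "new-model"

class NewModelTokenizer:        # stands in for a slow tokenizer class
    pass

class NewModelTokenizerFast:    # stands in for a fast tokenizer class
    pass

# Maps a config class to its (slow_tokenizer_class, fast_tokenizer_class) pair.
TOKENIZER_MAPPING = {}

def register(config_class, slow_tokenizer_class=None,
             fast_tokenizer_class=None, exist_ok=False):
    # Refuse to silently overwrite an existing entry unless exist_ok=True.
    if config_class in TOKENIZER_MAPPING and not exist_ok:
        raise ValueError(f"{config_class.__name__} is already registered")
    TOKENIZER_MAPPING[config_class] = (slow_tokenizer_class, fast_tokenizer_class)

register(NewModelConfig, NewModelTokenizer, NewModelTokenizerFast)
slow, fast = TOKENIZER_MAPPING[NewModelConfig]
```

Once registered, a lookup by the config class (or, in practice, by the config's `model_type`) recovers the tokenizer classes to instantiate.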

AutoFeatureExtractor

class transformers.AutoFeatureExtractor

< >

( )

This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained

< >

( pretrained_model_name_or_path **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co.
    • a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the feature extractor files and override the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.

Instantiate one of the feature extractor classes of the library from a pretrained model.

The feature extractor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • audio-spectrogram-transformerASTFeatureExtractor (Audio Spectrogram Transformer model)
  • beitBeitFeatureExtractor (BEiT model)
  • chinese_clipChineseCLIPFeatureExtractor (Chinese-CLIP model)
  • clapClapFeatureExtractor (CLAP model)
  • clipCLIPFeatureExtractor (CLIP model)
  • clipsegViTFeatureExtractor (CLIPSeg model)
  • clvpClvpFeatureExtractor (CLVP model)
  • conditional_detrConditionalDetrFeatureExtractor (Conditional DETR model)
  • convnextConvNextFeatureExtractor (ConvNeXT model)
  • cvtConvNextFeatureExtractor (CvT model)
  • data2vec-audioWav2Vec2FeatureExtractor (Data2VecAudio model)
  • data2vec-visionBeitFeatureExtractor (Data2VecVision model)
  • deformable_detrDeformableDetrFeatureExtractor (Deformable DETR model)
  • deitDeiTFeatureExtractor (DeiT model)
  • detrDetrFeatureExtractor (DETR model)
  • dinatViTFeatureExtractor (DiNAT model)
  • donut-swinDonutFeatureExtractor (DonutSwin model)
  • dptDPTFeatureExtractor (DPT model)
  • encodecEncodecFeatureExtractor (EnCodec model)
  • flavaFlavaFeatureExtractor (FLAVA model)
  • glpnGLPNFeatureExtractor (GLPN model)
  • groupvitCLIPFeatureExtractor (GroupViT model)
  • hubertWav2Vec2FeatureExtractor (Hubert model)
  • imagegptImageGPTFeatureExtractor (ImageGPT model)
  • layoutlmv2LayoutLMv2FeatureExtractor (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3FeatureExtractor (LayoutLMv3 model)
  • levitLevitFeatureExtractor (LeViT model)
  • maskformerMaskFormerFeatureExtractor (MaskFormer model)
  • mctctMCTCTFeatureExtractor (M-CTC-T model)
  • mobilenet_v1MobileNetV1FeatureExtractor (MobileNetV1 model)
  • mobilenet_v2MobileNetV2FeatureExtractor (MobileNetV2 model)
  • mobilevitMobileViTFeatureExtractor (MobileViT model)
  • natViTFeatureExtractor (NAT model)
  • owlvitOwlViTFeatureExtractor (OWL-ViT model)
  • perceiverPerceiverFeatureExtractor (Perceiver model)
  • poolformerPoolFormerFeatureExtractor (PoolFormer model)
  • pop2pianoPop2PianoFeatureExtractor (Pop2Piano model)
  • regnetConvNextFeatureExtractor (RegNet model)
  • resnetConvNextFeatureExtractor (ResNet model)
  • seamless_m4tSeamlessM4TFeatureExtractor (SeamlessM4T model)
  • seamless_m4t_v2SeamlessM4TFeatureExtractor (SeamlessM4Tv2 model)
  • segformerSegformerFeatureExtractor (SegFormer model)
  • sewWav2Vec2FeatureExtractor (SEW model)
  • sew-dWav2Vec2FeatureExtractor (SEW-D model)
  • speech_to_textSpeech2TextFeatureExtractor (Speech2Text model)
  • speecht5SpeechT5FeatureExtractor (SpeechT5 model)
  • swiftformerViTFeatureExtractor (SwiftFormer model)
  • swinViTFeatureExtractor (Swin Transformer model)
  • swinv2ViTFeatureExtractor (Swin Transformer V2 model)
  • table-transformerDetrFeatureExtractor (Table Transformer model)
  • timesformerVideoMAEFeatureExtractor (TimeSformer model)
  • tvltTvltFeatureExtractor (TVLT model)
  • unispeechWav2Vec2FeatureExtractor (UniSpeech model)
  • unispeech-satWav2Vec2FeatureExtractor (UniSpeechSat model)
  • univnetUnivNetFeatureExtractor (UnivNet model)
  • vanConvNextFeatureExtractor (VAN model)
  • videomaeVideoMAEFeatureExtractor (VideoMAE model)
  • viltViltFeatureExtractor (ViLT model)
  • vitViTFeatureExtractor (ViT model)
  • vit_maeViTFeatureExtractor (ViTMAE model)
  • vit_msnViTFeatureExtractor (ViTMSN model)
  • wav2vec2Wav2Vec2FeatureExtractor (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2FeatureExtractor (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2FeatureExtractor (Wav2Vec2-Conformer model)
  • wavlmWav2Vec2FeatureExtractor (WavLM model)
  • whisperWhisperFeatureExtractor (Whisper model)
  • xclipCLIPFeatureExtractor (X-CLIP model)
  • yolosYolosFeatureExtractor (YOLOS model)

Passing token=True is required when you want to use a private model.
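The selection rule described above (an exact lookup on the config's `model_type`, then a pattern-matching fallback on the model name or path) can be sketched as follows. The mapping entries and the function name are illustrative, not the transformers internals:

```python
# Hedged sketch of the class-selection rule: prefer the config's
# model_type; otherwise pattern-match the model id / path itself.

FEATURE_EXTRACTOR_MAPPING = {
    "wav2vec2": "Wav2Vec2FeatureExtractor",
    "whisper": "WhisperFeatureExtractor",
    "vit": "ViTFeatureExtractor",
}

def resolve_class(name_or_path, model_type=None):
    # 1) Exact lookup on the config's model_type, when available.
    if model_type is not None:
        return FEATURE_EXTRACTOR_MAPPING[model_type]
    # 2) Fallback: substring matching on the model id or path.
    for key, cls in FEATURE_EXTRACTOR_MAPPING.items():
        if key in name_or_path:
            return cls
    raise ValueError(f"Could not infer a feature extractor for {name_or_path!r}")
```

For example, `resolve_class("facebook/wav2vec2-base-960h")` falls back to pattern matching and selects the Wav2Vec2 entry, while an explicit `model_type` short-circuits the search.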

Examples:

>>> from transformers import AutoFeatureExtractor

>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using *save_pretrained('./test/saved_model/')*)
>>> # feature_extractor = AutoFeatureExtractor.from_pretrained("./test/saved_model/")

register

< >

( config_class feature_extractor_class exist_ok = False )

Parameters

  • config_class (PretrainedConfig) — The configuration corresponding to the model to register.
  • feature_extractor_class (FeatureExtractorMixin) — The feature extractor to register.

Register a new feature extractor for this class.

AutoImageProcessor

class transformers.AutoImageProcessor

< >

( )

This is a generic image processor class that will be instantiated as one of the image processor classes of the library when created with the AutoImageProcessor.from_pretrained() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained

< >

( pretrained_model_name_or_path *inputs **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co.
    • a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the image processor files and override the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • use_fast (bool, optional, defaults to False) — Use a fast torchvision-based image processor if it is supported for a given model. If a fast image processor is not available for a given model, a normal numpy-based image processor is returned instead.
  • return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final image processor object. If True, then this function returns a Tuple(image_processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of kwargs which has not been used to update image_processor and is otherwise ignored.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not image processor attributes is controlled by the return_unused_kwargs keyword parameter.

Instantiate one of the image processor classes of the library from a pretrained model.

The image processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • alignEfficientNetImageProcessor (ALIGN model)
  • beitBeitImageProcessor (BEiT model)
  • bitBitImageProcessor (BiT model)
  • blipBlipImageProcessor (BLIP model)
  • blip-2BlipImageProcessor (BLIP-2 model)
  • bridgetowerBridgeTowerImageProcessor (BridgeTower model)
  • chameleonChameleonImageProcessor (Chameleon model)
  • chinese_clipChineseCLIPImageProcessor (Chinese-CLIP model)
  • clipCLIPImageProcessor (CLIP model)
  • clipsegViTImageProcessor or ViTImageProcessorFast (CLIPSeg model)
  • conditional_detrConditionalDetrImageProcessor (Conditional DETR model)
  • convnextConvNextImageProcessor (ConvNeXT model)
  • convnextv2ConvNextImageProcessor (ConvNeXTV2 model)
  • cvtConvNextImageProcessor (CvT model)
  • data2vec-visionBeitImageProcessor (Data2VecVision model)
  • deformable_detrDeformableDetrImageProcessor (Deformable DETR model)
  • deitDeiTImageProcessor (DeiT model)
  • depth_anythingDPTImageProcessor (Depth Anything model)
  • detaDetaImageProcessor (DETA model)
  • detrDetrImageProcessor (DETR model)
  • dinatViTImageProcessor or ViTImageProcessorFast (DiNAT model)
  • dinov2BitImageProcessor (DINOv2 model)
  • donut-swinDonutImageProcessor (DonutSwin model)
  • dptDPTImageProcessor (DPT model)
  • efficientformerEfficientFormerImageProcessor (EfficientFormer model)
  • efficientnetEfficientNetImageProcessor (EfficientNet model)
  • flavaFlavaImageProcessor (FLAVA model)
  • focalnetBitImageProcessor (FocalNet model)
  • fuyuFuyuImageProcessor (Fuyu model)
  • gitCLIPImageProcessor (GIT model)
  • glpnGLPNImageProcessor (GLPN model)
  • grounding-dinoGroundingDinoImageProcessor (Grounding DINO model)
  • groupvitCLIPImageProcessor (GroupViT model)
  • hieraBitImageProcessor (Hiera model)
  • ideficsIdeficsImageProcessor (IDEFICS model)
  • idefics2Idefics2ImageProcessor (Idefics2 model)
  • imagegptImageGPTImageProcessor (ImageGPT model)
  • instructblipBlipImageProcessor (InstructBLIP model)
  • instructblipvideoInstructBlipVideoImageProcessor (InstructBlipVideo model)
  • kosmos-2CLIPImageProcessor (KOSMOS-2 model)
  • layoutlmv2LayoutLMv2ImageProcessor (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3ImageProcessor (LayoutLMv3 model)
  • levitLevitImageProcessor (LeViT model)
  • llavaCLIPImageProcessor (LLaVa model)
  • llava-next-videoLlavaNextVideoImageProcessor (LLaVa-NeXT-Video model)
  • llava_nextLlavaNextImageProcessor (LLaVA-NeXT model)
  • mask2formerMask2FormerImageProcessor (Mask2Former model)
  • maskformerMaskFormerImageProcessor (MaskFormer model)
  • mgp-strViTImageProcessor or ViTImageProcessorFast (MGP-STR model)
  • mobilenet_v1MobileNetV1ImageProcessor (MobileNetV1 model)
  • mobilenet_v2MobileNetV2ImageProcessor (MobileNetV2 model)
  • mobilevitMobileViTImageProcessor (MobileViT model)
  • mobilevitv2MobileViTImageProcessor (MobileViTV2 model)
  • natViTImageProcessor or ViTImageProcessorFast (NAT model)
  • nougatNougatImageProcessor (Nougat model)
  • oneformerOneFormerImageProcessor (OneFormer model)
  • owlv2Owlv2ImageProcessor (OWLv2 model)
  • owlvitOwlViTImageProcessor (OWL-ViT model)
  • perceiverPerceiverImageProcessor (Perceiver model)
  • pix2structPix2StructImageProcessor (Pix2Struct model)
  • poolformerPoolFormerImageProcessor (PoolFormer model)
  • pvtPvtImageProcessor (PVT model)
  • pvt_v2PvtImageProcessor (PVTv2 model)
  • regnetConvNextImageProcessor (RegNet model)
  • resnetConvNextImageProcessor (ResNet model)
  • rt_detrR or T (RT-DETR model)
  • samSamImageProcessor (SAM model)
  • segformerSegformerImageProcessor (SegFormer model)
  • seggptSegGptImageProcessor (SegGPT model)
  • siglipSiglipImageProcessor (SigLIP model)
  • swiftformerViTImageProcessor or ViTImageProcessorFast (SwiftFormer model)
  • swinViTImageProcessor or ViTImageProcessorFast (Swin Transformer model)
  • swin2srSwin2SRImageProcessor (Swin2SR model)
  • swinv2ViTImageProcessor or ViTImageProcessorFast (Swin Transformer V2 model)
  • table-transformerDetrImageProcessor (Table Transformer model)
  • timesformerVideoMAEImageProcessor (TimeSformer model)
  • tvltTvltImageProcessor (TVLT model)
  • tvpTvpImageProcessor (TVP model)
  • udopLayoutLMv3ImageProcessor (UDOP model)
  • upernetSegformerImageProcessor (UPerNet model)
  • vanConvNextImageProcessor (VAN model)
  • videomaeVideoMAEImageProcessor (VideoMAE model)
  • viltViltImageProcessor (ViLT model)
  • vipllavaCLIPImageProcessor (VipLlava model)
  • vitViTImageProcessor or ViTImageProcessorFast (ViT model)
  • vit_hybridViTHybridImageProcessor (ViT Hybrid model)
  • vit_maeViTImageProcessor or ViTImageProcessorFast (ViTMAE model)
  • vit_msnViTImageProcessor or ViTImageProcessorFast (ViTMSN model)
  • vitmatteVitMatteImageProcessor (ViTMatte model)
  • xclipCLIPImageProcessor (X-CLIP model)
  • yolosYolosImageProcessor (YOLOS model)
  • zoedepthZoeDepthImageProcessor (ZoeDepth model)

Passing token=True is required when you want to use a private model.

Examples:

>>> from transformers import AutoImageProcessor

>>> # Download image processor from huggingface.co and cache.
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

>>> # If image processor files are in a directory (e.g. image processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # image_processor = AutoImageProcessor.from_pretrained("./test/saved_model/")
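The use_fast behavior documented above amounts to a simple preference rule: return the fast (torchvision-based) processor class when one exists, otherwise fall back to the slow (numpy-based) one. A minimal sketch, with placeholder class names rather than the real transformers lookup tables:

```python
# Illustrative sketch of the use_fast selection rule. Entries and names
# are placeholders; the real mapping lives inside transformers.

PROCESSOR_CLASSES = {
    # model_type: (slow_class, fast_class or None)
    "vit": ("ViTImageProcessor", "ViTImageProcessorFast"),
    "detr": ("DetrImageProcessor", None),  # no fast variant available
}

def pick_image_processor(model_type, use_fast=False):
    slow_cls, fast_cls = PROCESSOR_CLASSES[model_type]
    # Honor use_fast only when a fast implementation actually exists.
    if use_fast and fast_cls is not None:
        return fast_cls
    return slow_cls
```

Note that requesting use_fast=True for a model with no fast variant is not an error; the slow processor is returned silently.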

register

< >

( config_class image_processor_class = None slow_image_processor_class = None fast_image_processor_class = None exist_ok = False )

Parameters

  • config_class (PretrainedConfig) — The configuration corresponding to the model to register.
  • image_processor_class (ImageProcessingMixin, optional) — The image processor to register.
  • slow_image_processor_class (ImageProcessingMixin, optional) — The slow image processor to register.
  • fast_image_processor_class (ImageProcessingMixin, optional) — The fast image processor to register.

Register a new image processor for this class.

AutoProcessor

class transformers.AutoProcessor

< >

( )

This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained

< >

( pretrained_model_name_or_path **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained processor hosted inside a model repo on huggingface.co.
    • a path to a directory containing processor files saved using the save_pretrained() method, e.g., ./my_model_directory/.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained processor should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the processor files and override the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final processor object. If True, then this function returns a Tuple(processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not processor attributes: i.e., the part of kwargs which has not been used to update processor and is otherwise ignored.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not processor attributes is controlled by the return_unused_kwargs keyword parameter.

Instantiate one of the processor classes of the library from a pretrained model.

The processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible):

  • alignAlignProcessor (ALIGN model)
  • altclipAltCLIPProcessor (AltCLIP model)
  • barkBarkProcessor (Bark model)
  • blipBlipProcessor (BLIP model)
  • blip-2Blip2Processor (BLIP-2 model)
  • bridgetowerBridgeTowerProcessor (BridgeTower model)
  • chameleonChameleonProcessor (Chameleon model)
  • chinese_clipChineseCLIPProcessor (Chinese-CLIP model)
  • clapClapProcessor (CLAP model)
  • clipCLIPProcessor (CLIP model)
  • clipsegCLIPSegProcessor (CLIPSeg model)
  • clvpClvpProcessor (CLVP model)
  • flavaFlavaProcessor (FLAVA model)
  • fuyuFuyuProcessor (Fuyu model)
  • gitGitProcessor (GIT model)
  • grounding-dinoGroundingDinoProcessor (Grounding DINO model)
  • groupvitCLIPProcessor (GroupViT model)
  • hubertWav2Vec2Processor (Hubert model)
  • ideficsIdeficsProcessor (IDEFICS model)
  • idefics2Idefics2Processor (Idefics2 model)
  • instructblipInstructBlipProcessor (InstructBLIP model)
  • instructblipvideoInstructBlipVideoProcessor (InstructBlipVideo model)
  • kosmos-2Kosmos2Processor (KOSMOS-2 model)
  • layoutlmv2LayoutLMv2Processor (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3Processor (LayoutLMv3 model)
  • llavaLlavaProcessor (LLaVa model)
  • llava-next-videoLlavaNextVideoProcessor (LLaVa-NeXT-Video model)
  • llava_nextLlavaNextProcessor (LLaVA-NeXT model)
  • markuplmMarkupLMProcessor (MarkupLM model)
  • mctctMCTCTProcessor (M-CTC-T model)
  • mgp-strMgpstrProcessor (MGP-STR model)
  • oneformerOneFormerProcessor (OneFormer model)
  • owlv2Owlv2Processor (OWLv2 model)
  • owlvitOwlViTProcessor (OWL-ViT model)
  • paligemmaPaliGemmaProcessor (PaliGemma model)
  • pix2structPix2StructProcessor (Pix2Struct model)
  • pop2pianoPop2PianoProcessor (Pop2Piano model)
  • samSamProcessor (SAM model)
  • seamless_m4tSeamlessM4TProcessor (SeamlessM4T model)
  • sewWav2Vec2Processor (SEW model)
  • sew-dWav2Vec2Processor (SEW-D model)
  • siglipSiglipProcessor (SigLIP model)
  • speech_to_textSpeech2TextProcessor (Speech2Text model)
  • speech_to_text_2Speech2Text2Processor (Speech2Text2 model)
  • speecht5SpeechT5Processor (SpeechT5 model)
  • trocrTrOCRProcessor (TrOCR model)
  • tvltTvltProcessor (TVLT model)
  • tvpTvpProcessor (TVP model)
  • unispeechWav2Vec2Processor (UniSpeech model)
  • unispeech-satWav2Vec2Processor (UniSpeechSat model)
  • video_llavaVideoLlavaProcessor (VideoLlava model)
  • viltViltProcessor (ViLT model)
  • vipllavaLlavaProcessor (VipLlava model)
  • vision-text-dual-encoderVisionTextDualEncoderProcessor (VisionTextDualEncoder model)
  • wav2vec2Wav2Vec2Processor (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2Processor (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2Processor (Wav2Vec2-Conformer model)
  • wavlmWav2Vec2Processor (WavLM model)
  • whisperWhisperProcessor (Whisper model)
  • xclipXCLIPProcessor (X-CLIP model)

Passing token=True is required when you want to use a private model.

Examples:

>>> from transformers import AutoProcessor

>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # processor = AutoProcessor.from_pretrained("./test/saved_model/")

register

< >

( config_class processor_class exist_ok = False )

Parameters

  • config_class (PretrainedConfig) — The configuration corresponding to the model to register.
  • processor_class (ProcessorMixin) — The processor to register.

Register a new processor for this class.

Generic model classes

The following auto classes are available for instantiating a base model class without a specific head.

AutoModel

class transformers.AutoModel

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).
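The "cannot be instantiated directly" behavior shared by all the auto classes follows a simple pattern: `__init__()` raises, and instances are only produced through classmethod factories such as from_config() or from_pretrained(). A minimal sketch of that pattern (this is an illustration, not the transformers code):

```python
# Sketch of the auto-class instantiation guard: direct construction
# raises, while a classmethod factory builds the object.

class AutoModelSketch:
    def __init__(self):
        raise EnvironmentError(
            "AutoModelSketch is designed to be instantiated using "
            "`AutoModelSketch.from_config(config)`."
        )

    @classmethod
    def from_config(cls, config):
        # Bypass __init__ so the guard above does not fire.
        model = cls.__new__(cls)
        model.config = config
        return model
```

Calling `AutoModelSketch()` raises, while `AutoModelSketch.from_config(config)` returns a usable instance; the real auto classes then go one step further and return an instance of the concrete model class selected from the config.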

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • ASTConfig configuration class: ASTModel (Audio Spectrogram Transformer model)
    • AlbertConfig configuration class: AlbertModel (ALBERT model)
    • AlignConfig configuration class: AlignModel (ALIGN model)
    • AltCLIPConfig configuration class: AltCLIPModel (AltCLIP model)
    • AutoformerConfig configuration class: AutoformerModel (Autoformer model)
    • BarkConfig configuration class: BarkModel (Bark model)
    • BartConfig configuration class: BartModel (BART model)
    • BeitConfig configuration class: BeitModel (BEiT model)
    • BertConfig configuration class: BertModel (BERT model)
    • BertGenerationConfig configuration class: BertGenerationEncoder (Bert Generation model)
    • BigBirdConfig configuration class: BigBirdModel (BigBird model)
    • BigBirdPegasusConfig configuration class: BigBirdPegasusModel (BigBird-Pegasus model)
    • BioGptConfig configuration class: BioGptModel (BioGpt model)
    • BitConfig configuration class: BitModel (BiT model)
    • BlenderbotConfig configuration class: BlenderbotModel (Blenderbot model)
    • BlenderbotSmallConfig configuration class: BlenderbotSmallModel (BlenderbotSmall model)
    • Blip2Config configuration class: Blip2Model (BLIP-2 model)
    • BlipConfig configuration class: BlipModel (BLIP model)
    • BloomConfig configuration class: BloomModel (BLOOM model)
    • BridgeTowerConfig configuration class: BridgeTowerModel (BridgeTower model)
    • BrosConfig configuration class: BrosModel (BROS model)
    • CLIPConfig configuration class: CLIPModel (CLIP model)
    • CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model)
    • CLIPVisionConfig configuration class: CLIPVisionModel (CLIPVisionModel model)
    • CTRLConfig configuration class: CTRLModel (CTRL model)
    • CamembertConfig configuration class: CamembertModel (CamemBERT model)
    • CanineConfig configuration class: CanineModel (CANINE model)
    • ChameleonConfig configuration class: ChameleonModel (Chameleon model)
    • ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model)
    • ChineseCLIPVisionConfig configuration class: ChineseCLIPVisionModel (ChineseCLIPVisionModel model)
    • ClapConfig configuration class: ClapModel (CLAP model)
    • ClvpConfig configuration class: ClvpModelForConditionalGeneration (CLVP model)
    • CodeGenConfig configuration class: CodeGenModel (CodeGen model)
    • CohereConfig configuration class: CohereModel (Cohere model)
    • ConditionalDetrConfig configuration class: ConditionalDetrModel (Conditional DETR model)
    • ConvBertConfig configuration class: ConvBertModel (ConvBERT model)
    • ConvNextConfig configuration class: ConvNextModel (ConvNeXT model)
    • ConvNextV2Config configuration class: ConvNextV2Model (ConvNeXTV2 model)
    • CpmAntConfig configuration class: CpmAntModel (CPM-Ant model)
    • CvtConfig configuration class: CvtModel (CvT model)
    • DPRConfig configuration class: DPRQuestionEncoder (DPR model)
    • DPTConfig configuration class: DPTModel (DPT model)
    • Data2VecAudioConfig configuration class: Data2VecAudioModel (Data2VecAudio model)
    • Data2VecTextConfig configuration class: Data2VecTextModel (Data2VecText model)
    • Data2VecVisionConfig configuration class: Data2VecVisionModel (Data2VecVision model)
    • DbrxConfig configuration class: DbrxModel (DBRX model)
    • DebertaConfig configuration class: DebertaModel (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2Model (DeBERTa-v2 model)
    • DecisionTransformerConfig configuration class: DecisionTransformerModel (Decision Transformer model)
    • DeformableDetrConfig configuration class: DeformableDetrModel (Deformable DETR model)
    • DeiTConfig configuration class: DeiTModel (DeiT model)
    • DetaConfig configuration class: DetaModel (DETA model)
    • DetrConfig configuration class: DetrModel (DETR model)
    • DinatConfig configuration class: DinatModel (DiNAT model)
    • Dinov2Config configuration class: Dinov2Model (DINOv2 model)
    • DistilBertConfig configuration class: DistilBertModel (DistilBERT model)
    • DonutSwinConfig configuration class: DonutSwinModel (DonutSwin model)
    • EfficientFormerConfig configuration class: EfficientFormerModel (EfficientFormer model)
    • EfficientNetConfig configuration class: EfficientNetModel (EfficientNet model)
    • ElectraConfig configuration class: ElectraModel (ELECTRA model)
    • EncodecConfig configuration class: EncodecModel (EnCodec model)
    • ErnieConfig configuration class: ErnieModel (ERNIE model)
    • ErnieMConfig configuration class: ErnieMModel (ErnieM model)
    • EsmConfig configuration class: EsmModel (ESM model)
    • FNetConfig configuration class: FNetModel (FNet model)
    • FSMTConfig configuration class: FSMTModel (FairSeq Machine-Translation model)
    • FalconConfig configuration class: FalconModel (Falcon model)
    • FastSpeech2ConformerConfig configuration class: FastSpeech2ConformerModel (FastSpeech2Conformer model)
    • FlaubertConfig configuration class: FlaubertModel (FlauBERT model)
    • FlavaConfig configuration class: FlavaModel (FLAVA model)
    • FocalNetConfig configuration class: FocalNetModel (FocalNet model)
    • FunnelConfig configuration class: FunnelModel or FunnelBaseModel (Funnel Transformer model)
    • GLPNConfig configuration class: GLPNModel (GLPN model)
    • GPT2Config configuration class: GPT2Model (OpenAI GPT-2 model)
    • GPTBigCodeConfig configuration class: GPTBigCodeModel (GPTBigCode model)
    • GPTJConfig configuration class: GPTJModel (GPT-J model)
    • GPTNeoConfig configuration class: GPTNeoModel (GPT Neo model)
    • GPTNeoXConfig configuration class: GPTNeoXModel (GPT NeoX model)
    • GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseModel (GPT NeoX Japanese model)
    • GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
    • Gemma2Config configuration class: Gemma2Model (Gemma2 model)
    • GemmaConfig configuration class: GemmaModel (Gemma model)
    • GitConfig configuration class: GitModel (GIT model)
    • GraphormerConfig configuration class: GraphormerModel (Graphormer model)
    • GroundingDinoConfig configuration class: GroundingDinoModel (Grounding DINO model)
    • GroupViTConfig configuration class: GroupViTModel (GroupViT model)
    • HieraConfig configuration class: HieraModel (Hiera model)
    • HubertConfig configuration class: HubertModel (Hubert model)
    • IBertConfig configuration class: IBertModel (I-BERT model)
    • Idefics2Config configuration class: Idefics2Model (Idefics2 model)
    • IdeficsConfig configuration class: IdeficsModel (IDEFICS model)
    • ImageGPTConfig configuration class: ImageGPTModel (ImageGPT model)
    • InformerConfig configuration class: InformerModel (Informer model)
    • JambaConfig configuration class: JambaModel (Jamba model)
    • JetMoeConfig configuration class: JetMoeModel (JetMoe model)
    • JukeboxConfig configuration class: JukeboxModel (Jukebox model)
    • Kosmos2Config configuration class: Kosmos2Model (KOSMOS-2 model)
    • LEDConfig configuration class: LEDModel (LED model)
    • LayoutLMConfig configuration class: LayoutLMModel (LayoutLM model)
    • LayoutLMv2Config configuration class: LayoutLMv2Model (LayoutLMv2 model)
    • LayoutLMv3Config configuration class: LayoutLMv3Model (LayoutLMv3 model)
    • LevitConfig configuration class: LevitModel (LeViT model)
    • LiltConfig configuration class: LiltModel (LiLT model)
    • LlamaConfig configuration class: LlamaModel (LLaMA model)
    • LongT5Config configuration class: LongT5Model (LongT5 model)
    • LongformerConfig configuration class: LongformerModel (Longformer model)
    • LukeConfig configuration class: LukeModel (LUKE model)
    • LxmertConfig configuration class: LxmertModel (LXMERT model)
    • M2M100Config configuration class: M2M100Model (M2M100 model)
    • MBartConfig configuration class: MBartModel (mBART model)
    • MCTCTConfig configuration class: MCTCTModel (M-CTC-T model)
    • MPNetConfig configuration class: MPNetModel (MPNet model)
    • MT5Config configuration class: MT5Model (MT5 model)
    • Mamba2Config configuration class: Mamba2Model (mamba2 model)
    • MambaConfig configuration class: MambaModel (Mamba model)
    • MarianConfig configuration class: MarianModel (Marian model)
    • MarkupLMConfig configuration class: MarkupLMModel (MarkupLM model)
    • Mask2FormerConfig configuration class: Mask2FormerModel (Mask2Former model)
    • MaskFormerConfig configuration class: MaskFormerModel (MaskFormer model)
    • MaskFormerSwinConfig configuration class: MaskFormerSwinModel (MaskFormerSwin model)
    • MegaConfig configuration class: MegaModel (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertModel (Megatron-BERT model)
    • MgpstrConfig configuration class: MgpstrForSceneTextRecognition (MGP-STR model)
    • MistralConfig configuration class: MistralModel (Mistral model)
    • MixtralConfig configuration class: MixtralModel (Mixtral model)
    • MobileBertConfig configuration class: MobileBertModel (MobileBERT model)
    • MobileNetV1Config configuration class: MobileNetV1Model (MobileNetV1 model)
    • MobileNetV2Config configuration class: MobileNetV2Model (MobileNetV2 model)
    • MobileViTConfig configuration class: MobileViTModel (MobileViT model)
    • MobileViTV2Config configuration class: MobileViTV2Model (MobileViTV2 model)
    • MptConfig configuration class: MptModel (MPT model)
    • MraConfig configuration class: MraModel (MRA model)
    • MusicgenConfig configuration class: MusicgenModel (MusicGen model)
    • MusicgenMelodyConfig configuration class: MusicgenMelodyModel (MusicGen Melody model)
    • MvpConfig configuration class: MvpModel (MVP model)
    • NatConfig configuration class: NatModel (NAT model)
    • NemotronConfig configuration class: NemotronModel (Nemotron model)
    • NezhaConfig configuration class: NezhaModel (Nezha model)
    • NllbMoeConfig configuration class: NllbMoeModel (NLLB-MOE model)
    • NystromformerConfig configuration class: NystromformerModel (Nyströmformer model)
    • OPTConfig configuration class: OPTModel (OPT model)
    • OlmoConfig configuration class: OlmoModel (OLMo model)
    • OneFormerConfig configuration class: OneFormerModel (OneFormer model)
    • OpenAIGPTConfig configuration class: OpenAIGPTModel (OpenAI GPT model)
    • OpenLlamaConfig configuration class: OpenLlamaModel (OpenLlama model)
    • OwlViTConfig configuration class: OwlViTModel (OWL-ViT model)
    • Owlv2Config configuration class: Owlv2Model (OWLv2 model)
    • PLBartConfig configuration class: PLBartModel (PLBart model)
    • PatchTSMixerConfig configuration class: PatchTSMixerModel (PatchTSMixer model)
    • PatchTSTConfig configuration class: PatchTSTModel (PatchTST model)
    • PegasusConfig configuration class: PegasusModel (Pegasus model)
    • PegasusXConfig configuration class: PegasusXModel (PEGASUS-X model)
    • PerceiverConfig configuration class: PerceiverModel (Perceiver model)
    • PersimmonConfig configuration class: PersimmonModel (Persimmon model)
    • Phi3Config configuration class: Phi3Model (Phi3 model)
    • PhiConfig configuration class: PhiModel (Phi model)
    • PoolFormerConfig configuration class: PoolFormerModel (PoolFormer model)
    • ProphetNetConfig configuration class: ProphetNetModel (ProphetNet model)
    • PvtConfig configuration class: PvtModel (PVT model)
    • PvtV2Config configuration class: PvtV2Model (PVTv2 model)
    • QDQBertConfig configuration class: QDQBertModel (QDQBert model)
    • Qwen2Config configuration class: Qwen2Model (Qwen2 model)
    • Qwen2MoeConfig configuration class: Qwen2MoeModel (Qwen2MoE model)
    • RTDetrConfig configuration class: RTDetrModel (RT-DETR model)
    • RecurrentGemmaConfig configuration class: RecurrentGemmaModel (RecurrentGemma model)
    • ReformerConfig configuration class: ReformerModel (Reformer model)
    • RegNetConfig configuration class: RegNetModel (RegNet model)
    • RemBertConfig configuration class: RemBertModel (RemBERT model)
    • ResNetConfig configuration class: ResNetModel (ResNet model)
    • RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
    • RoCBertConfig configuration class: RoCBertModel (RoCBert model)
    • RoFormerConfig configuration class: RoFormerModel (RoFormer model)
    • RobertaConfig configuration class: RobertaModel (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
    • RwkvConfig configuration class: RwkvModel (RWKV model)
    • SEWConfig configuration class: SEWModel (SEW model)
    • SEWDConfig configuration class: SEWDModel (SEW-D model)
    • SamConfig configuration class: SamModel (SAM model)
    • SeamlessM4TConfig configuration class: SeamlessM4TModel (SeamlessM4T model)
    • SeamlessM4Tv2Config configuration class: SeamlessM4Tv2Model (SeamlessM4Tv2 model)
    • SegGptConfig configuration class: SegGptModel (SegGPT model)
    • SegformerConfig configuration class: SegformerModel (SegFormer model)
    • SiglipConfig configuration class: SiglipModel (SigLIP model)
    • SiglipVisionConfig configuration class: SiglipVisionModel (SiglipVisionModel model)
    • Speech2TextConfig configuration class: Speech2TextModel (Speech2Text model)
    • SpeechT5Config configuration class: SpeechT5Model (SpeechT5 model)
    • SplinterConfig configuration class: SplinterModel (Splinter model)
    • SqueezeBertConfig configuration class: SqueezeBertModel (SqueezeBERT model)
    • StableLmConfig configuration class: StableLmModel (StableLm model)
    • Starcoder2Config configuration class: Starcoder2Model (Starcoder2 model)
    • SwiftFormerConfig configuration class: SwiftFormerModel (SwiftFormer model)
    • Swin2SRConfig configuration class: Swin2SRModel (Swin2SR model)
    • SwinConfig configuration class: SwinModel (Swin Transformer model)
    • Swinv2Config configuration class: Swinv2Model (Swin Transformer V2 model)
    • SwitchTransformersConfig configuration class: SwitchTransformersModel (SwitchTransformers model)
    • T5Config configuration class: T5Model (T5 model)
    • TableTransformerConfig configuration class: TableTransformerModel (Table Transformer model)
    • TapasConfig configuration class: TapasModel (TAPAS model)
    • TimeSeriesTransformerConfig configuration class: TimeSeriesTransformerModel (Time Series Transformer model)
    • TimesformerConfig configuration class: TimesformerModel (TimeSformer model)
    • TimmBackboneConfig configuration class: TimmBackbone (TimmBackbone model)
    • TrajectoryTransformerConfig configuration class: TrajectoryTransformerModel (Trajectory Transformer model)
    • TransfoXLConfig configuration class: TransfoXLModel (Transformer-XL model)
    • TvltConfig configuration class: TvltModel (TVLT model)
    • TvpConfig configuration class: TvpModel (TVP model)
    • UMT5Config configuration class: UMT5Model (UMT5 model)
    • UdopConfig configuration class: UdopModel (UDOP model)
    • UniSpeechConfig configuration class: UniSpeechModel (UniSpeech model)
    • UniSpeechSatConfig configuration class: UniSpeechSatModel (UniSpeechSat model)
    • UnivNetConfig configuration class: UnivNetModel (UnivNet model)
    • VanConfig configuration class: VanModel (VAN model)
    • ViTConfig configuration class: ViTModel (ViT model)
    • ViTHybridConfig configuration class: ViTHybridModel (ViT Hybrid model)
    • ViTMAEConfig configuration class: ViTMAEModel (ViTMAE model)
    • ViTMSNConfig configuration class: ViTMSNModel (ViTMSN model)
    • VideoMAEConfig configuration class: VideoMAEModel (VideoMAE model)
    • ViltConfig configuration class: ViltModel (ViLT model)
    • VisionTextDualEncoderConfig configuration class: VisionTextDualEncoderModel (VisionTextDualEncoder model)
    • VisualBertConfig configuration class: VisualBertModel (VisualBERT model)
    • VitDetConfig configuration class: VitDetModel (VitDet model)
    • VitsConfig configuration class: VitsModel (VITS model)
    • VivitConfig configuration class: VivitModel (ViViT model)
    • Wav2Vec2BertConfig configuration class: Wav2Vec2BertModel (Wav2Vec2-BERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2Model (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerModel (Wav2Vec2-Conformer model)
    • WavLMConfig configuration class: WavLMModel (WavLM model)
    • WhisperConfig configuration class: WhisperModel (Whisper model)
    • XCLIPConfig configuration class: XCLIPModel (X-CLIP model)
    • XGLMConfig configuration class: XGLMModel (XGLM model)
    • XLMConfig configuration class: XLMModel (XLM model)
    • XLMProphetNetConfig configuration class: XLMProphetNetModel (XLM-ProphetNet model)
    • XLMRobertaConfig configuration class: XLMRobertaModel (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLModel (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetModel (XLNet model)
    • XmodConfig configuration class: XmodModel (X-MOD model)
    • YolosConfig configuration class: YolosModel (YOLOS model)
    • YosoConfig configuration class: YosoModel (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, SDPA is used for torch>=2.1.1 when available; otherwise, the default is the manual "eager" implementation.

Instantiates one of the base model classes of the library from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModel.from_config(config)
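The class selection performed by from_config can be pictured as a lookup from configuration class to model class, as in the mapping listed above. The following is a simplified, hypothetical sketch of that dispatch idea; the classes and registry here are illustrative stand-ins, not the actual transformers internals:

```python
# Simplified sketch of the config-class -> model-class dispatch behind
# AutoModel.from_config. All names here are illustrative placeholders.

class BertConfig: ...
class GPT2Config: ...

class BertModel:
    def __init__(self, config):
        self.config = config

class GPT2Model:
    def __init__(self, config):
        self.config = config

# The auto class keeps a registry keyed by configuration class
# (this is also what AutoModel.register(NewModelConfig, NewModel) extends).
_CONFIG_TO_MODEL = {
    BertConfig: BertModel,
    GPT2Config: GPT2Model,
}

def from_config(config):
    """Select the model class matching type(config) and instantiate it."""
    model_cls = _CONFIG_TO_MODEL[type(config)]
    return model_cls(config)

model = from_config(BertConfig())
print(type(model).__name__)  # BertModel
```

Because the lookup is keyed on the configuration class itself, registering a new config/model pair is enough for the auto class to resolve it with no further changes.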

from_pretrained

< >

( pretrained_model_name_or_path *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertModel (ALBERT model)
  • alignAlignModel (ALIGN model)
  • altclipAltCLIPModel (AltCLIP model)
  • audio-spectrogram-transformerASTModel (Audio Spectrogram Transformer model)
  • autoformerAutoformerModel (Autoformer model)
  • barkBarkModel (Bark model)
  • bartBartModel (BART model)
  • beitBeitModel (BEiT model)
  • bertBertModel (BERT model)
  • bert-generationBertGenerationEncoder (Bert Generation model)
  • big_birdBigBirdModel (BigBird model)
  • bigbird_pegasusBigBirdPegasusModel (BigBird-Pegasus model)
  • biogptBioGptModel (BioGpt model)
  • bitBitModel (BiT model)
  • blenderbotBlenderbotModel (Blenderbot model)
  • blenderbot-smallBlenderbotSmallModel (BlenderbotSmall model)
  • blipBlipModel (BLIP model)
  • blip-2Blip2Model (BLIP-2 model)
  • bloomBloomModel (BLOOM model)
  • bridgetowerBridgeTowerModel (BridgeTower model)
  • brosBrosModel (BROS model)
  • camembertCamembertModel (CamemBERT model)
  • canineCanineModel (CANINE model)
  • chameleonChameleonModel (Chameleon model)
  • chinese_clipChineseCLIPModel (Chinese-CLIP model)
  • chinese_clip_vision_modelChineseCLIPVisionModel (ChineseCLIPVisionModel model)
  • clapClapModel (CLAP model)
  • clipCLIPModel (CLIP model)
  • clip_vision_modelCLIPVisionModel (CLIPVisionModel model)
  • clipsegCLIPSegModel (CLIPSeg model)
  • clvpClvpModelForConditionalGeneration (CLVP model)
  • code_llamaLlamaModel (CodeLlama model)
  • codegenCodeGenModel (CodeGen model)
  • cohereCohereModel (Cohere model)
  • conditional_detrConditionalDetrModel (Conditional DETR model)
  • convbertConvBertModel (ConvBERT model)
  • convnextConvNextModel (ConvNeXT model)
  • convnextv2ConvNextV2Model (ConvNeXTV2 model)
  • cpmantCpmAntModel (CPM-Ant model)
  • ctrlCTRLModel (CTRL model)
  • cvtCvtModel (CvT model)
  • data2vec-audioData2VecAudioModel (Data2VecAudio model)
  • data2vec-textData2VecTextModel (Data2VecText model)
  • data2vec-visionData2VecVisionModel (Data2VecVision model)
  • dbrxDbrxModel (DBRX model)
  • debertaDebertaModel (DeBERTa model)
  • deberta-v2DebertaV2Model (DeBERTa-v2 model)
  • decision_transformerDecisionTransformerModel (Decision Transformer model)
  • deformable_detrDeformableDetrModel (Deformable DETR model)
  • deitDeiTModel (DeiT model)
  • detaDetaModel (DETA model)
  • detrDetrModel (DETR model)
  • dinatDinatModel (DiNAT model)
  • dinov2Dinov2Model (DINOv2 model)
  • distilbertDistilBertModel (DistilBERT model)
  • donut-swinDonutSwinModel (DonutSwin model)
  • dprDPRQuestionEncoder (DPR model)
  • dptDPTModel (DPT model)
  • efficientformerEfficientFormerModel (EfficientFormer model)
  • efficientnetEfficientNetModel (EfficientNet model)
  • electraElectraModel (ELECTRA model)
  • encodecEncodecModel (EnCodec model)
  • ernieErnieModel (ERNIE model)
  • ernie_mErnieMModel (ErnieM model)
  • esmEsmModel (ESM model)
  • falconFalconModel (Falcon model)
  • fastspeech2_conformerFastSpeech2ConformerModel (FastSpeech2Conformer model)
  • flaubertFlaubertModel (FlauBERT model)
  • flavaFlavaModel (FLAVA model)
  • fnetFNetModel (FNet model)
  • focalnetFocalNetModel (FocalNet model)
  • fsmtFSMTModel (FairSeq Machine-Translation model)
  • funnelFunnelModel or FunnelBaseModel (Funnel Transformer model)
  • gemmaGemmaModel (Gemma model)
  • gemma2Gemma2Model (Gemma2 model)
  • gitGitModel (GIT model)
  • glpnGLPNModel (GLPN model)
  • gpt-sw3GPT2Model (GPT-Sw3 model)
  • gpt2GPT2Model (OpenAI GPT-2 model)
  • gpt_bigcodeGPTBigCodeModel (GPTBigCode model)
  • gpt_neoGPTNeoModel (GPT Neo model)
  • gpt_neoxGPTNeoXModel (GPT NeoX model)
  • gpt_neox_japaneseGPTNeoXJapaneseModel (GPT NeoX Japanese model)
  • gptjGPTJModel (GPT-J model)
  • gptsan-japaneseGPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
  • graphormerGraphormerModel (Graphormer model)
  • grounding-dinoGroundingDinoModel (Grounding DINO model)
  • groupvitGroupViTModel (GroupViT model)
  • hieraHieraModel (Hiera model)
  • hubertHubertModel (Hubert model)
  • ibertIBertModel (I-BERT model)
  • ideficsIdeficsModel (IDEFICS model)
  • idefics2Idefics2Model (Idefics2 model)
  • imagegptImageGPTModel (ImageGPT model)
  • informerInformerModel (Informer model)
  • jambaJambaModel (Jamba model)
  • jetmoeJetMoeModel (JetMoe model)
  • jukeboxJukeboxModel (Jukebox model)
  • kosmos-2Kosmos2Model (KOSMOS-2 model)
  • layoutlmLayoutLMModel (LayoutLM model)
  • layoutlmv2LayoutLMv2Model (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3Model (LayoutLMv3 model)
  • ledLEDModel (LED model)
  • levitLevitModel (LeViT model)
  • liltLiltModel (LiLT model)
  • llamaLlamaModel (LLaMA model)
  • longformerLongformerModel (Longformer model)
  • longt5LongT5Model (LongT5 model)
  • lukeLukeModel (LUKE model)
  • lxmertLxmertModel (LXMERT model)
  • m2m_100M2M100Model (M2M100 model)
  • mambaMambaModel (Mamba model)
  • mamba2Mamba2Model (mamba2 model)
  • marianMarianModel (Marian model)
  • markuplmMarkupLMModel (MarkupLM model)
  • mask2formerMask2FormerModel (Mask2Former model)
  • maskformerMaskFormerModel (MaskFormer model)
  • maskformer-swinMaskFormerSwinModel (MaskFormerSwin model)
  • mbartMBartModel (mBART model)
  • mctctMCTCTModel (M-CTC-T model)
  • megaMegaModel (MEGA model)
  • megatron-bertMegatronBertModel (Megatron-BERT model)
  • mgp-strMgpstrForSceneTextRecognition (MGP-STR model)
  • mistralMistralModel (Mistral model)
  • mixtralMixtralModel (Mixtral model)
  • mobilebertMobileBertModel (MobileBERT model)
  • mobilenet_v1MobileNetV1Model (MobileNetV1 model)
  • mobilenet_v2MobileNetV2Model (MobileNetV2 model)
  • mobilevitMobileViTModel (MobileViT model)
  • mobilevitv2MobileViTV2Model (MobileViTV2 model)
  • mpnetMPNetModel (MPNet model)
  • mptMptModel (MPT model)
  • mraMraModel (MRA model)
  • mt5MT5Model (MT5 model)
  • musicgenMusicgenModel (MusicGen model)
  • musicgen_melodyMusicgenMelodyModel (MusicGen Melody model)
  • mvpMvpModel (MVP model)
  • natNatModel (NAT model)
  • nemotronNemotronModel (Nemotron model)
  • nezhaNezhaModel (Nezha model)
  • nllb-moeNllbMoeModel (NLLB-MOE model)
  • nystromformerNystromformerModel (Nyströmformer model)
  • olmoOlmoModel (OLMo model)
  • oneformerOneFormerModel (OneFormer model)
  • open-llamaOpenLlamaModel (OpenLlama model)
  • openai-gptOpenAIGPTModel (OpenAI GPT model)
  • optOPTModel (OPT model)
  • owlv2Owlv2Model (OWLv2 model)
  • owlvitOwlViTModel (OWL-ViT model)
  • patchtsmixerPatchTSMixerModel (PatchTSMixer model)
  • patchtstPatchTSTModel (PatchTST model)
  • pegasusPegasusModel (Pegasus model)
  • pegasus_xPegasusXModel (PEGASUS-X model)
  • perceiverPerceiverModel (Perceiver model)
  • persimmonPersimmonModel (Persimmon model)
  • phiPhiModel (Phi model)
  • phi3Phi3Model (Phi3 model)
  • plbartPLBartModel (PLBart model)
  • poolformerPoolFormerModel (PoolFormer model)
  • prophetnetProphetNetModel (ProphetNet model)
  • pvtPvtModel (PVT model)
  • pvt_v2PvtV2Model (PVTv2 model)
  • qdqbertQDQBertModel (QDQBert model)
  • qwen2Qwen2Model (Qwen2 model)
  • qwen2_moeQwen2MoeModel (Qwen2MoE model)
  • recurrent_gemmaRecurrentGemmaModel (RecurrentGemma model)
  • reformerReformerModel (Reformer model)
  • regnetRegNetModel (RegNet model)
  • rembertRemBertModel (RemBERT model)
  • resnetResNetModel (ResNet model)
  • retribertRetriBertModel (RetriBERT model)
  • robertaRobertaModel (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertModel (RoCBert model)
  • roformerRoFormerModel (RoFormer model)
  • rt_detrRTDetrModel (RT-DETR model)
  • rwkvRwkvModel (RWKV model)
  • samSamModel (SAM model)
  • seamless_m4tSeamlessM4TModel (SeamlessM4T model)
  • seamless_m4t_v2SeamlessM4Tv2Model (SeamlessM4Tv2 model)
  • segformerSegformerModel (SegFormer model)
  • seggptSegGptModel (SegGPT model)
  • sewSEWModel (SEW model)
  • sew-dSEWDModel (SEW-D model)
  • siglipSiglipModel (SigLIP model)
  • siglip_vision_modelSiglipVisionModel (SiglipVisionModel model)
  • speech_to_textSpeech2TextModel (Speech2Text model)
  • speecht5SpeechT5Model (SpeechT5 model)
  • splinterSplinterModel (Splinter model)
  • squeezebertSqueezeBertModel (SqueezeBERT model)
  • stablelmStableLmModel (StableLm model)
  • starcoder2Starcoder2Model (Starcoder2 model)
  • swiftformerSwiftFormerModel (SwiftFormer model)
  • swinSwinModel (Swin Transformer model)
  • swin2srSwin2SRModel (Swin2SR model)
  • swinv2Swinv2Model (Swin Transformer V2 model)
  • switch_transformersSwitchTransformersModel (SwitchTransformers model)
  • t5T5Model (T5 model)
  • table-transformerTableTransformerModel (Table Transformer model)
  • tapasTapasModel (TAPAS model)
  • time_series_transformerTimeSeriesTransformerModel (Time Series Transformer model)
  • timesformerTimesformerModel (TimeSformer model)
  • timm_backboneTimmBackbone (TimmBackbone model)
  • trajectory_transformerTrajectoryTransformerModel (Trajectory Transformer model)
  • transfo-xlTransfoXLModel (Transformer-XL model)
  • tvltTvltModel (TVLT model)
  • tvpTvpModel (TVP model)
  • udopUdopModel (UDOP model)
  • umt5UMT5Model (UMT5 model)
  • unispeechUniSpeechModel (UniSpeech model)
  • unispeech-satUniSpeechSatModel (UniSpeechSat model)
  • univnetUnivNetModel (UnivNet model)
  • vanVanModel (VAN model)
  • videomaeVideoMAEModel (VideoMAE model)
  • viltViltModel (ViLT model)
  • vision-text-dual-encoderVisionTextDualEncoderModel (VisionTextDualEncoder model)
  • visual_bertVisualBertModel (VisualBERT model)
  • vitViTModel (ViT model)
  • vit_hybridViTHybridModel (ViT Hybrid model)
  • vit_maeViTMAEModel (ViTMAE model)
  • vit_msnViTMSNModel (ViTMSN model)
  • vitdetVitDetModel (VitDet model)
  • vitsVitsModel (VITS model)
  • vivitVivitModel (ViViT model)
  • wav2vec2Wav2Vec2Model (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2BertModel (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2ConformerModel (Wav2Vec2-Conformer model)
  • wavlmWavLMModel (WavLM model)
  • whisperWhisperModel (Whisper model)
  • xclipXCLIPModel (X-CLIP model)
  • xglmXGLMModel (XGLM model)
  • xlmXLMModel (XLM model)
  • xlm-prophetnetXLMProphetNetModel (XLM-ProphetNet model)
  • xlm-robertaXLMRobertaModel (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLModel (XLM-RoBERTa-XL model)
  • xlnetXLNetModel (XLNet model)
  • xmodXmodModel (X-MOD model)
  • yolosYolosModel (YOLOS model)
  • yosoYosoModel (YOSO model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModel.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
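The selection rule described above (use the config's model_type when available, otherwise fall back to pattern matching on pretrained_model_name_or_path) can be sketched as follows. This is a hypothetical illustration of the idea, not the actual transformers implementation, and the three-entry mapping is a tiny stand-in for the full list above:

```python
# Illustrative sketch of how a model class is selected:
# 1. prefer the model_type read from the config,
# 2. otherwise pattern-match on the pretrained name or path.
# Names and mapping are placeholders, not real transformers internals.

MODEL_TYPE_TO_CLASS = {
    "bert": "BertModel",
    "gpt2": "GPT2Model",
    "roberta": "RobertaModel",
}

def select_model_class(name_or_path, model_type=None):
    # Step 1: the model_type property of the config object wins outright.
    if model_type is not None:
        return MODEL_TYPE_TO_CLASS[model_type]
    # Step 2: fall back to substring matching on the name/path. Longer
    # keys are checked first so "roberta" matches before "bert" does.
    for key in sorted(MODEL_TYPE_TO_CLASS, key=len, reverse=True):
        if key in name_or_path.lower():
            return MODEL_TYPE_TO_CLASS[key]
    raise ValueError(f"Could not infer a model class from {name_or_path!r}")

print(select_model_class("google-bert/bert-base-cased", model_type="bert"))  # BertModel
print(select_model_class("roberta-base"))  # RobertaModel (matched by pattern)
```

Checking longer keys first in the fallback matters because many model names contain other model names as substrings (e.g. "roberta" contains "bert"), which is why the explicit model_type path is the preferred, unambiguous route.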

TFAutoModel

class transformers.TFAutoModel

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertModel (ALBERT model)
    • BartConfig configuration class: TFBartModel (BART model)
    • BertConfig configuration class: TFBertModel (BERT model)
    • BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model)
    • BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model)
    • BlipConfig configuration class: TFBlipModel (BLIP model)
    • CLIPConfig configuration class: TFCLIPModel (CLIP model)
    • CTRLConfig configuration class: TFCTRLModel (CTRL model)
    • CamembertConfig configuration class: TFCamembertModel (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertModel (ConvBERT model)
    • ConvNextConfig configuration class: TFConvNextModel (ConvNeXT model)
    • ConvNextV2Config configuration class: TFConvNextV2Model (ConvNeXTV2 model)
    • CvtConfig configuration class: TFCvtModel (CvT model)
    • DPRConfig configuration class: TFDPRQuestionEncoder (DPR model)
    • Data2VecVisionConfig configuration class: TFData2VecVisionModel (Data2VecVision model)
    • DebertaConfig configuration class: TFDebertaModel (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2Model (DeBERTa-v2 model)
    • DeiTConfig configuration class: TFDeiTModel (DeiT model)
    • DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model)
    • EfficientFormerConfig configuration class: TFEfficientFormerModel (EfficientFormer model)
    • ElectraConfig configuration class: TFElectraModel (ELECTRA model)
    • EsmConfig configuration class: TFEsmModel (ESM model)
    • FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
    • GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model)
    • GPTJConfig configuration class: TFGPTJModel (GPT-J model)
    • GroupViTConfig configuration class: TFGroupViTModel (GroupViT model)
    • HubertConfig configuration class: TFHubertModel (Hubert model)
    • IdeficsConfig configuration class: TFIdeficsModel (IDEFICS model)
    • LEDConfig configuration class: TFLEDModel (LED model)
    • LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3Model (LayoutLMv3 model)
    • LongformerConfig configuration class: TFLongformerModel (Longformer model)
    • LxmertConfig configuration class: TFLxmertModel (LXMERT model)
    • MBartConfig configuration class: TFMBartModel (mBART model)
    • MPNetConfig configuration class: TFMPNetModel (MPNet model)
    • MT5Config configuration class: TFMT5Model (MT5 model)
    • MarianConfig configuration class: TFMarianModel (Marian model)
    • MistralConfig configuration class: TFMistralModel (Mistral model)
    • MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model)
    • MobileViTConfig configuration class: TFMobileViTModel (MobileViT model)
    • OPTConfig configuration class: TFOPTModel (OPT model)
    • OpenAIGPTConfig configuration class: TFOpenAIGPTModel (OpenAI GPT model)
    • PegasusConfig configuration class: TFPegasusModel (Pegasus model)
    • RegNetConfig configuration class: TFRegNetModel (RegNet model)
    • RemBertConfig configuration class: TFRemBertModel (RemBERT model)
    • ResNetConfig configuration class: TFResNetModel (ResNet model)
    • RoFormerConfig configuration class: TFRoFormerModel (RoFormer model)
    • RobertaConfig configuration class: TFRobertaModel (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
    • SamConfig configuration class: TFSamModel (SAM model)
    • SegformerConfig configuration class: TFSegformerModel (SegFormer model)
    • Speech2TextConfig configuration class: TFSpeech2TextModel (Speech2Text model)
    • SwiftFormerConfig configuration class: TFSwiftFormerModel (SwiftFormer model)
    • SwinConfig configuration class: TFSwinModel (Swin Transformer model)
    • T5Config configuration class: TFT5Model (T5 model)
    • TapasConfig configuration class: TFTapasModel (TAPAS model)
    • TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model)
    • ViTConfig configuration class: TFViTModel (ViT model)
    • ViTMAEConfig configuration class: TFViTMAEModel (ViTMAE model)
    • VisionTextDualEncoderConfig configuration class: TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
    • Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model)
    • WhisperConfig configuration class: TFWhisperModel (Whisper model)
    • XGLMConfig configuration class: TFXGLMModel (XGLM model)
    • XLMConfig configuration class: TFXLMModel (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetModel (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the base model classes of the library from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModel.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — TFAlbertModel (ALBERT model)
  • bart — TFBartModel (BART model)
  • bert — TFBertModel (BERT model)
  • blenderbot — TFBlenderbotModel (Blenderbot model)
  • blenderbot-small — TFBlenderbotSmallModel (BlenderbotSmall model)
  • blip — TFBlipModel (BLIP model)
  • camembert — TFCamembertModel (CamemBERT model)
  • clip — TFCLIPModel (CLIP model)
  • convbert — TFConvBertModel (ConvBERT model)
  • convnext — TFConvNextModel (ConvNeXT model)
  • convnextv2 — TFConvNextV2Model (ConvNeXTV2 model)
  • ctrl — TFCTRLModel (CTRL model)
  • cvt — TFCvtModel (CvT model)
  • data2vec-vision — TFData2VecVisionModel (Data2VecVision model)
  • deberta — TFDebertaModel (DeBERTa model)
  • deberta-v2 — TFDebertaV2Model (DeBERTa-v2 model)
  • deit — TFDeiTModel (DeiT model)
  • distilbert — TFDistilBertModel (DistilBERT model)
  • dpr — TFDPRQuestionEncoder (DPR model)
  • efficientformer — TFEfficientFormerModel (EfficientFormer model)
  • electra — TFElectraModel (ELECTRA model)
  • esm — TFEsmModel (ESM model)
  • flaubert — TFFlaubertModel (FlauBERT model)
  • funnel — TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
  • gpt-sw3 — TFGPT2Model (GPT-Sw3 model)
  • gpt2 — TFGPT2Model (OpenAI GPT-2 model)
  • gptj — TFGPTJModel (GPT-J model)
  • groupvit — TFGroupViTModel (GroupViT model)
  • hubert — TFHubertModel (Hubert model)
  • idefics — TFIdeficsModel (IDEFICS model)
  • layoutlm — TFLayoutLMModel (LayoutLM model)
  • layoutlmv3 — TFLayoutLMv3Model (LayoutLMv3 model)
  • led — TFLEDModel (LED model)
  • longformer — TFLongformerModel (Longformer model)
  • lxmert — TFLxmertModel (LXMERT model)
  • marian — TFMarianModel (Marian model)
  • mbart — TFMBartModel (mBART model)
  • mistral — TFMistralModel (Mistral model)
  • mobilebert — TFMobileBertModel (MobileBERT model)
  • mobilevit — TFMobileViTModel (MobileViT model)
  • mpnet — TFMPNetModel (MPNet model)
  • mt5 — TFMT5Model (MT5 model)
  • openai-gpt — TFOpenAIGPTModel (OpenAI GPT model)
  • opt — TFOPTModel (OPT model)
  • pegasus — TFPegasusModel (Pegasus model)
  • regnet — TFRegNetModel (RegNet model)
  • rembert — TFRemBertModel (RemBERT model)
  • resnet — TFResNetModel (ResNet model)
  • roberta — TFRobertaModel (RoBERTa model)
  • roberta-prelayernorm — TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  • roformer — TFRoFormerModel (RoFormer model)
  • sam — TFSamModel (SAM model)
  • segformer — TFSegformerModel (SegFormer model)
  • speech_to_text — TFSpeech2TextModel (Speech2Text model)
  • swiftformer — TFSwiftFormerModel (SwiftFormer model)
  • swin — TFSwinModel (Swin Transformer model)
  • t5 — TFT5Model (T5 model)
  • tapas — TFTapasModel (TAPAS model)
  • transfo-xl — TFTransfoXLModel (Transformer-XL model)
  • vision-text-dual-encoder — TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
  • vit — TFViTModel (ViT model)
  • vit_mae — TFViTMAEModel (ViTMAE model)
  • wav2vec2 — TFWav2Vec2Model (Wav2Vec2 model)
  • whisper — TFWhisperModel (Whisper model)
  • xglm — TFXGLMModel (XGLM model)
  • xlm — TFXLMModel (XLM model)
  • xlm-roberta — TFXLMRobertaModel (XLM-RoBERTa model)
  • xlnet — TFXLNetModel (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModel.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModel

class transformers.FlaxAutoModel

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertModel (ALBERT model)
    • BartConfig configuration class: FlaxBartModel (BART model)
    • BeitConfig configuration class: FlaxBeitModel (BEiT model)
    • BertConfig configuration class: FlaxBertModel (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdModel (BigBird model)
    • BlenderbotConfig configuration class: FlaxBlenderbotModel (Blenderbot model)
    • BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallModel (BlenderbotSmall model)
    • BloomConfig configuration class: FlaxBloomModel (BLOOM model)
    • CLIPConfig configuration class: FlaxCLIPModel (CLIP model)
    • DistilBertConfig configuration class: FlaxDistilBertModel (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraModel (ELECTRA model)
    • GPT2Config configuration class: FlaxGPT2Model (OpenAI GPT-2 model)
    • GPTJConfig configuration class: FlaxGPTJModel (GPT-J model)
    • GPTNeoConfig configuration class: FlaxGPTNeoModel (GPT Neo model)
    • GemmaConfig configuration class: FlaxGemmaModel (Gemma model)
    • LlamaConfig configuration class: FlaxLlamaModel (LLaMA model)
    • LongT5Config configuration class: FlaxLongT5Model (LongT5 model)
    • MBartConfig configuration class: FlaxMBartModel (mBART model)
    • MT5Config configuration class: FlaxMT5Model (MT5 model)
    • MarianConfig configuration class: FlaxMarianModel (Marian model)
    • MistralConfig configuration class: FlaxMistralModel (Mistral model)
    • OPTConfig configuration class: FlaxOPTModel (OPT model)
    • PegasusConfig configuration class: FlaxPegasusModel (Pegasus model)
    • RegNetConfig configuration class: FlaxRegNetModel (RegNet model)
    • ResNetConfig configuration class: FlaxResNetModel (ResNet model)
    • RoFormerConfig configuration class: FlaxRoFormerModel (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaModel (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
    • T5Config configuration class: FlaxT5Model (T5 model)
    • ViTConfig configuration class: FlaxViTModel (ViT model)
    • VisionTextDualEncoderConfig configuration class: FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model)
    • Wav2Vec2Config configuration class: FlaxWav2Vec2Model (Wav2Vec2 model)
    • WhisperConfig configuration class: FlaxWhisperModel (Whisper model)
    • XGLMConfig configuration class: FlaxXGLMModel (XGLM model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaModel (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the base model classes of the library from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModel.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — FlaxAlbertModel (ALBERT model)
  • bart — FlaxBartModel (BART model)
  • beit — FlaxBeitModel (BEiT model)
  • bert — FlaxBertModel (BERT model)
  • big_bird — FlaxBigBirdModel (BigBird model)
  • blenderbot — FlaxBlenderbotModel (Blenderbot model)
  • blenderbot-small — FlaxBlenderbotSmallModel (BlenderbotSmall model)
  • bloom — FlaxBloomModel (BLOOM model)
  • clip — FlaxCLIPModel (CLIP model)
  • distilbert — FlaxDistilBertModel (DistilBERT model)
  • electra — FlaxElectraModel (ELECTRA model)
  • gemma — FlaxGemmaModel (Gemma model)
  • gpt-sw3 — FlaxGPT2Model (GPT-Sw3 model)
  • gpt2 — FlaxGPT2Model (OpenAI GPT-2 model)
  • gpt_neo — FlaxGPTNeoModel (GPT Neo model)
  • gptj — FlaxGPTJModel (GPT-J model)
  • llama — FlaxLlamaModel (LLaMA model)
  • longt5 — FlaxLongT5Model (LongT5 model)
  • marian — FlaxMarianModel (Marian model)
  • mbart — FlaxMBartModel (mBART model)
  • mistral — FlaxMistralModel (Mistral model)
  • mt5 — FlaxMT5Model (MT5 model)
  • opt — FlaxOPTModel (OPT model)
  • pegasus — FlaxPegasusModel (Pegasus model)
  • regnet — FlaxRegNetModel (RegNet model)
  • resnet — FlaxResNetModel (ResNet model)
  • roberta — FlaxRobertaModel (RoBERTa model)
  • roberta-prelayernorm — FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  • roformer — FlaxRoFormerModel (RoFormer model)
  • t5 — FlaxT5Model (T5 model)
  • vision-text-dual-encoder — FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model)
  • vit — FlaxViTModel (ViT model)
  • wav2vec2 — FlaxWav2Vec2Model (Wav2Vec2 model)
  • whisper — FlaxWhisperModel (Whisper model)
  • xglm — FlaxXGLMModel (XGLM model)
  • xlm-roberta — FlaxXLMRobertaModel (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModel.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModel.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

Generic pretraining classes

The following auto classes are available for instantiating models with a pretraining head.

AutoModelForPreTraining

class transformers.AutoModelForPreTraining

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForPreTraining (ALBERT model)
    • BartConfig configuration class: BartForConditionalGeneration (BART model)
    • BertConfig configuration class: BertForPreTraining (BERT model)
    • BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model)
    • BloomConfig configuration class: BloomForCausalLM (BLOOM model)
    • CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
    • CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model)
    • DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
    • ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model)
    • ErnieConfig configuration class: ErnieForPreTraining (ERNIE model)
    • FNetConfig configuration class: FNetForPreTraining (FNet model)
    • FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
    • FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
    • FlavaConfig configuration class: FlavaForPreTraining (FLAVA model)
    • FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model)
    • GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
    • GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model)
    • GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
    • HieraConfig configuration class: HieraForPreTraining (Hiera model)
    • IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
    • Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model)
    • IdeficsConfig configuration class: IdeficsForVisionText2Text (IDEFICS model)
    • LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
    • LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model)
    • LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model)
    • LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
    • LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
    • LukeConfig configuration class: LukeForMaskedLM (LUKE model)
    • LxmertConfig configuration class: LxmertForPreTraining (LXMERT model)
    • MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
    • Mamba2Config configuration class: Mamba2ForCausalLM (mamba2 model)
    • MambaConfig configuration class: MambaForCausalLM (Mamba model)
    • MegaConfig configuration class: MegaForMaskedLM (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForPreTraining (Megatron-BERT model)
    • MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model)
    • MptConfig configuration class: MptForCausalLM (MPT model)
    • MraConfig configuration class: MraForMaskedLM (MRA model)
    • MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
    • NezhaConfig configuration class: NezhaForPreTraining (Nezha model)
    • NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model)
    • OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
    • PaliGemmaConfig configuration class: PaliGemmaForConditionalGeneration (PaliGemma model)
    • RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
    • RoCBertConfig configuration class: RoCBertForPreTraining (RoCBert model)
    • RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • RwkvConfig configuration class: RwkvForCausalLM (RWKV model)
    • SplinterConfig configuration class: SplinterForPreTraining (Splinter model)
    • SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
    • SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model)
    • T5Config configuration class: T5ForConditionalGeneration (T5 model)
    • TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
    • TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
    • TvltConfig configuration class: TvltForPreTraining (TVLT model)
    • UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model)
    • UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model)
    • ViTMAEConfig configuration class: ViTMAEForPreTraining (ViTMAE model)
    • VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model)
    • VideoMAEConfig configuration class: VideoMAEForPreTraining (VideoMAE model)
    • VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model)
    • VisualBertConfig configuration class: VisualBertForPreTraining (VisualBERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model)
    • XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
    • XmodConfig configuration class: XmodForMaskedLM (X-MOD model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForPreTraining.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
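The kwargs routing described above can be illustrated with a minimal sketch (a simplified illustration, not the actual transformers implementation; DummyConfig and split_kwargs are hypothetical names): keys that match a configuration attribute override that attribute, and the rest are forwarded to the model.

```python
class DummyConfig:
    """Stand-in for a PretrainedConfig with two attributes."""
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 768

def split_kwargs(config, **kwargs):
    """Route kwargs: config-attribute keys update the config, the rest go to the model."""
    config_kwargs, model_kwargs = {}, {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)   # override the config attribute
            config_kwargs[key] = value
        else:
            model_kwargs[key] = value     # forwarded to the model's __init__
    return config_kwargs, model_kwargs

config = DummyConfig()
cfg_kw, model_kw = split_kwargs(config, output_attentions=True, some_model_arg=1)
print(cfg_kw)                    # {'output_attentions': True}
print(model_kw)                  # {'some_model_arg': 1}
print(config.output_attentions)  # True
```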

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertForPreTraining (ALBERT model)
  • bartBartForConditionalGeneration (BART model)
  • bertBertForPreTraining (BERT model)
  • big_birdBigBirdForPreTraining (BigBird model)
  • bloomBloomForCausalLM (BLOOM model)
  • camembertCamembertForMaskedLM (CamemBERT model)
  • ctrlCTRLLMHeadModel (CTRL model)
  • data2vec-textData2VecTextForMaskedLM (Data2VecText model)
  • debertaDebertaForMaskedLM (DeBERTa model)
  • deberta-v2DebertaV2ForMaskedLM (DeBERTa-v2 model)
  • distilbertDistilBertForMaskedLM (DistilBERT model)
  • electraElectraForPreTraining (ELECTRA model)
  • ernieErnieForPreTraining (ERNIE model)
  • flaubertFlaubertWithLMHeadModel (FlauBERT model)
  • flavaFlavaForPreTraining (FLAVA model)
  • fnetFNetForPreTraining (FNet model)
  • fsmtFSMTForConditionalGeneration (FairSeq Machine-Translation model)
  • funnelFunnelForPreTraining (Funnel Transformer model)
  • gpt-sw3GPT2LMHeadModel (GPT-Sw3 model)
  • gpt2GPT2LMHeadModel (OpenAI GPT-2 model)
  • gpt_bigcodeGPTBigCodeForCausalLM (GPTBigCode model)
  • gptsan-japaneseGPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
  • hieraHieraForPreTraining (Hiera model)
  • ibertIBertForMaskedLM (I-BERT model)
  • ideficsIdeficsForVisionText2Text (IDEFICS model)
  • idefics2Idefics2ForConditionalGeneration (Idefics2 model)
  • layoutlmLayoutLMForMaskedLM (LayoutLM model)
  • llavaLlavaForConditionalGeneration (LLaVa model)
  • llava-next-videoLlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
  • llava_nextLlavaNextForConditionalGeneration (LLaVA-NeXT model)
  • longformerLongformerForMaskedLM (Longformer model)
  • lukeLukeForMaskedLM (LUKE model)
  • lxmertLxmertForPreTraining (LXMERT model)
  • mambaMambaForCausalLM (Mamba model)
  • mamba2Mamba2ForCausalLM (mamba2 model)
  • megaMegaForMaskedLM (MEGA model)
  • megatron-bertMegatronBertForPreTraining (Megatron-BERT model)
  • mobilebertMobileBertForPreTraining (MobileBERT model)
  • mpnetMPNetForMaskedLM (MPNet model)
  • mptMptForCausalLM (MPT model)
  • mraMraForMaskedLM (MRA model)
  • mvpMvpForConditionalGeneration (MVP model)
  • nezhaNezhaForPreTraining (Nezha model)
  • nllb-moeNllbMoeForConditionalGeneration (NLLB-MOE model)
  • openai-gptOpenAIGPTLMHeadModel (OpenAI GPT model)
  • paligemmaPaliGemmaForConditionalGeneration (PaliGemma model)
  • retribertRetriBertModel (RetriBERT model)
  • robertaRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForPreTraining (RoCBert model)
  • rwkvRwkvForCausalLM (RWKV model)
  • splinterSplinterForPreTraining (Splinter model)
  • squeezebertSqueezeBertForMaskedLM (SqueezeBERT model)
  • switch_transformersSwitchTransformersForConditionalGeneration (SwitchTransformers model)
  • t5T5ForConditionalGeneration (T5 model)
  • tapasTapasForMaskedLM (TAPAS model)
  • transfo-xlTransfoXLLMHeadModel (Transformer-XL model)
  • tvltTvltForPreTraining (TVLT model)
  • unispeechUniSpeechForPreTraining (UniSpeech model)
  • unispeech-satUniSpeechSatForPreTraining (UniSpeechSat model)
  • video_llavaVideoLlavaForConditionalGeneration (VideoLlava model)
  • videomaeVideoMAEForPreTraining (VideoMAE model)
  • vipllavaVipLlavaForConditionalGeneration (VipLlava model)
  • visual_bertVisualBertForPreTraining (VisualBERT model)
  • vit_maeViTMAEForPreTraining (ViTMAE model)
  • wav2vec2Wav2Vec2ForPreTraining (Wav2Vec2 model)
  • wav2vec2-conformerWav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model)
  • xlmXLMWithLMHeadModel (XLM model)
  • xlm-robertaXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
  • xlnetXLNetLMHeadModel (XLNet model)
  • xmodXmodForMaskedLM (X-MOD model)
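The selection logic behind the mapping above can be sketched as a registry keyed by model_type, with a substring pattern-matching fallback on the name or path (a simplified illustration, not the real transformers dispatch; the registry here holds class names as strings and omits ordering subtleties such as longer keys taking precedence):

```python
# Hypothetical miniature of the model_type -> model class mapping.
MODEL_MAPPING = {
    "bert": "BertForPreTraining",
    "gpt2": "GPT2LMHeadModel",
    "t5": "T5ForConditionalGeneration",
}

def resolve_model_class(name_or_path, model_type=None):
    """Pick a model class by model_type, else by pattern matching on the name/path."""
    if model_type is not None:
        return MODEL_MAPPING[model_type]
    for pattern, cls in MODEL_MAPPING.items():
        if pattern in name_or_path:
            return cls
    raise ValueError(f"Could not infer a model class from {name_or_path!r}")

print(resolve_model_class("google-bert/bert-base-cased"))   # BertForPreTraining
print(resolve_model_class("anything", model_type="t5"))     # T5ForConditionalGeneration
```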

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForPreTraining.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForPreTraining

class transformers.TFAutoModelForPreTraining

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForPreTraining (ALBERT model)
    • BartConfig configuration class: TFBartForConditionalGeneration (BART model)
    • BertConfig configuration class: TFBertForPreTraining (BERT model)
    • CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model)
    • CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model)
    • DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForPreTraining (ELECTRA model)
    • FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForPreTraining (Funnel Transformer model)
    • GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model)
    • IdeficsConfig configuration class: TFIdeficsForVisionText2Text (IDEFICS model)
    • LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model)
    • LxmertConfig configuration class: TFLxmertForPreTraining (LXMERT model)
    • MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model)
    • MobileBertConfig configuration class: TFMobileBertForPreTraining (MobileBERT model)
    • OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model)
    • RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • T5Config configuration class: TFT5ForConditionalGeneration (T5 model)
    • TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model)
    • TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model)
    • ViTMAEConfig configuration class: TFViTMAEForPreTraining (ViTMAE model)
    • XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
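The distinction in the note above can be sketched as follows (a simplified illustration, not the actual transformers implementation; TinyModel is a hypothetical class): from_config builds the architecture with freshly initialized weights, while from_pretrained additionally loads saved weights.

```python
class TinyModel:
    """Hypothetical stand-in for a model class with a config and weights."""
    def __init__(self, config):
        self.config = config
        self.weights = {"w": 0.0}  # freshly initialized, not pretrained

    @classmethod
    def from_config(cls, config):
        return cls(config)  # architecture only, no weights loaded

    @classmethod
    def from_pretrained(cls, config, state_dict):
        model = cls(config)
        model.weights.update(state_dict)  # load the saved weights
        return model

fresh = TinyModel.from_config({"hidden_size": 4})
loaded = TinyModel.from_pretrained({"hidden_size": 4}, {"w": 1.5})
print(fresh.weights["w"], loaded.weights["w"])  # 0.0 1.5
```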

Examples:

>>> from transformers import AutoConfig, TFAutoModelForPreTraining

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForPreTraining.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertTFAlbertForPreTraining (ALBERT model)
  • bartTFBartForConditionalGeneration (BART model)
  • bertTFBertForPreTraining (BERT model)
  • camembertTFCamembertForMaskedLM (CamemBERT model)
  • ctrlTFCTRLLMHeadModel (CTRL model)
  • distilbertTFDistilBertForMaskedLM (DistilBERT model)
  • electraTFElectraForPreTraining (ELECTRA model)
  • flaubertTFFlaubertWithLMHeadModel (FlauBERT model)
  • funnelTFFunnelForPreTraining (Funnel Transformer model)
  • gpt-sw3TFGPT2LMHeadModel (GPT-Sw3 model)
  • gpt2TFGPT2LMHeadModel (OpenAI GPT-2 model)
  • ideficsTFIdeficsForVisionText2Text (IDEFICS model)
  • layoutlmTFLayoutLMForMaskedLM (LayoutLM model)
  • lxmertTFLxmertForPreTraining (LXMERT model)
  • mobilebertTFMobileBertForPreTraining (MobileBERT model)
  • mpnetTFMPNetForMaskedLM (MPNet model)
  • openai-gptTFOpenAIGPTLMHeadModel (OpenAI GPT model)
  • robertaTFRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • t5TFT5ForConditionalGeneration (T5 model)
  • tapasTFTapasForMaskedLM (TAPAS model)
  • transfo-xlTFTransfoXLLMHeadModel (Transformer-XL model)
  • vit_maeTFViTMAEForPreTraining (ViTMAE model)
  • xlmTFXLMWithLMHeadModel (XLM model)
  • xlm-robertaTFXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • xlnetTFXLNetLMHeadModel (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForPreTraining

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForPreTraining.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForPreTraining

class transformers.FlaxAutoModelForPreTraining

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForPreTraining (ALBERT model)
    • BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
    • BertConfig configuration class: FlaxBertForPreTraining (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForPreTraining (BigBird model)
    • ElectraConfig configuration class: FlaxElectraForPreTraining (ELECTRA model)
    • LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model)
    • MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
    • MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model)
    • RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model)
    • Wav2Vec2Config configuration class: FlaxWav2Vec2ForPreTraining (Wav2Vec2 model)
    • WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForPreTraining.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertFlaxAlbertForPreTraining (ALBERT model)
  • bartFlaxBartForConditionalGeneration (BART model)
  • bertFlaxBertForPreTraining (BERT model)
  • big_birdFlaxBigBirdForPreTraining (BigBird model)
  • electraFlaxElectraForPreTraining (ELECTRA model)
  • longt5FlaxLongT5ForConditionalGeneration (LongT5 model)
  • mbartFlaxMBartForConditionalGeneration (mBART model)
  • mt5FlaxMT5ForConditionalGeneration (MT5 model)
  • robertaFlaxRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • roformerFlaxRoFormerForMaskedLM (RoFormer model)
  • t5FlaxT5ForConditionalGeneration (T5 model)
  • wav2vec2FlaxWav2Vec2ForPreTraining (Wav2Vec2 model)
  • whisperFlaxWhisperForConditionalGeneration (Whisper model)
  • xlm-robertaFlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForPreTraining.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

Natural Language Processing

The following auto classes are available for the natural language processing tasks listed below.

AutoModelForCausalLM

class transformers.AutoModelForCausalLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BartConfig configuration class: BartForCausalLM (BART model)
    • BertConfig configuration class: BertLMHeadModel (BERT model)
    • BertGenerationConfig configuration class: BertGenerationDecoder (Bert Generation model)
    • BigBirdConfig configuration class: BigBirdForCausalLM (BigBird model)
    • BigBirdPegasusConfig configuration class: BigBirdPegasusForCausalLM (BigBird-Pegasus model)
    • BioGptConfig configuration class: BioGptForCausalLM (BioGpt model)
    • BlenderbotConfig configuration class: BlenderbotForCausalLM (Blenderbot model)
    • BlenderbotSmallConfig configuration class: BlenderbotSmallForCausalLM (BlenderbotSmall model)
    • BloomConfig configuration class: BloomForCausalLM (BLOOM model)
    • CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
    • CamembertConfig configuration class: CamembertForCausalLM (CamemBERT model)
    • CodeGenConfig configuration class: CodeGenForCausalLM (CodeGen model)
    • CohereConfig configuration class: CohereForCausalLM (Cohere model)
    • CpmAntConfig configuration class: CpmAntForCausalLM (CPM-Ant model)
    • Data2VecTextConfig configuration class: Data2VecTextForCausalLM (Data2VecText model)
    • DbrxConfig configuration class: DbrxForCausalLM (DBRX model)
    • ElectraConfig configuration class: ElectraForCausalLM (ELECTRA model)
    • ErnieConfig configuration class: ErnieForCausalLM (ERNIE model)
    • FalconConfig configuration class: FalconForCausalLM (Falcon model)
    • FuyuConfig configuration class: FuyuForCausalLM (Fuyu model)
    • GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
    • GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model)
    • GPTJConfig configuration class: GPTJForCausalLM (GPT-J model)
    • GPTNeoConfig configuration class: GPTNeoForCausalLM (GPT Neo model)
    • GPTNeoXConfig configuration class: GPTNeoXForCausalLM (GPT NeoX model)
    • GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model)
    • Gemma2Config configuration class: Gemma2ForCausalLM (Gemma2 model)
    • GemmaConfig configuration class: GemmaForCausalLM (Gemma model)
    • GitConfig configuration class: GitForCausalLM (GIT model)
    • JambaConfig configuration class: JambaForCausalLM (Jamba model)
    • JetMoeConfig configuration class: JetMoeForCausalLM (JetMoe model)
    • LlamaConfig configuration class: LlamaForCausalLM (LLaMA model)
    • MBartConfig configuration class: MBartForCausalLM (mBART model)
    • Mamba2Config configuration class: Mamba2ForCausalLM (mamba2 model)
    • MambaConfig configuration class: MambaForCausalLM (Mamba model)
    • MarianConfig configuration class: MarianForCausalLM (Marian model)
    • MegaConfig configuration class: MegaForCausalLM (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForCausalLM (Megatron-BERT model)
    • MistralConfig configuration class: MistralForCausalLM (Mistral model)
    • MixtralConfig configuration class: MixtralForCausalLM (Mixtral model)
    • MptConfig configuration class: MptForCausalLM (MPT model)
    • MusicgenConfig configuration class: MusicgenForCausalLM (MusicGen model)
    • MusicgenMelodyConfig configuration class: MusicgenMelodyForCausalLM (MusicGen Melody model)
    • MvpConfig configuration class: MvpForCausalLM (MVP model)
    • NemotronConfig configuration class: NemotronForCausalLM (Nemotron model)
    • OPTConfig configuration class: OPTForCausalLM (OPT model)
    • OlmoConfig configuration class: OlmoForCausalLM (OLMo model)
    • OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
    • OpenLlamaConfig configuration class: OpenLlamaForCausalLM (OpenLlama model)
    • PLBartConfig configuration class: PLBartForCausalLM (PLBart model)
    • PegasusConfig configuration class: PegasusForCausalLM (Pegasus model)
    • PersimmonConfig configuration class: PersimmonForCausalLM (Persimmon model)
    • Phi3Config configuration class: Phi3ForCausalLM (Phi3 model)
    • PhiConfig configuration class: PhiForCausalLM (Phi model)
    • ProphetNetConfig configuration class: ProphetNetForCausalLM (ProphetNet model)
    • QDQBertConfig configuration class: QDQBertLMHeadModel (QDQBert model)
    • Qwen2Config configuration class: Qwen2ForCausalLM (Qwen2 model)
    • Qwen2MoeConfig configuration class: Qwen2MoeForCausalLM (Qwen2MoE model)
    • RecurrentGemmaConfig configuration class: RecurrentGemmaForCausalLM (RecurrentGemma model)
    • ReformerConfig configuration class: ReformerModelWithLMHead (Reformer model)
    • RemBertConfig configuration class: RemBertForCausalLM (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForCausalLM (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForCausalLM (RoFormer model)
    • RobertaConfig configuration class: RobertaForCausalLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
    • RwkvConfig configuration class: RwkvForCausalLM (RWKV model)
    • Speech2Text2Config configuration class: Speech2Text2ForCausalLM (Speech2Text2 model)
    • StableLmConfig configuration class: StableLmForCausalLM (StableLm model)
    • Starcoder2Config configuration class: Starcoder2ForCausalLM (Starcoder2 model)
    • TrOCRConfig configuration class: TrOCRForCausalLM (TrOCR model)
    • TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
    • WhisperConfig configuration class: WhisperForCausalLM (Whisper model)
    • XGLMConfig configuration class: XGLMForCausalLM (XGLM model)
    • XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
    • XLMProphetNetConfig configuration class: XLMProphetNetForCausalLM (XLM-ProphetNet model)
    • XLMRobertaConfig configuration class: XLMRobertaForCausalLM (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForCausalLM (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
    • XmodConfig configuration class: XmodForCausalLM (X-MOD model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForCausalLM.from_config(config)

from_pretrained

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
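The kwargs-splitting behavior described above can be sketched in pure Python. This is a simplified illustration only, not the actual Transformers implementation; the class and attribute names are hypothetical:

```python
# Simplified sketch: when no explicit config is passed, keys in **kwargs
# that match configuration attributes override the config, and the
# remaining keys are forwarded to the model's __init__.

class ToyConfig:
    """Hypothetical stand-in for a PretrainedConfig."""
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 768

def split_kwargs(config, **kwargs):
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)   # overrides a config attribute
        else:
            model_kwargs[key] = value     # left for the model's __init__
    return config, model_kwargs

config, model_kwargs = split_kwargs(ToyConfig(), output_attentions=True, foo=1)
print(config.output_attentions)  # True
print(model_kwargs)              # {'foo': 1}
```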

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bartBartForCausalLM (BART model)
  • bertBertLMHeadModel (BERT model)
  • bert-generationBertGenerationDecoder (Bert Generation model)
  • big_birdBigBirdForCausalLM (BigBird model)
  • bigbird_pegasusBigBirdPegasusForCausalLM (BigBird-Pegasus model)
  • biogptBioGptForCausalLM (BioGpt model)
  • blenderbotBlenderbotForCausalLM (Blenderbot model)
  • blenderbot-smallBlenderbotSmallForCausalLM (BlenderbotSmall model)
  • bloomBloomForCausalLM (BLOOM model)
  • camembertCamembertForCausalLM (CamemBERT model)
  • code_llamaLlamaForCausalLM (CodeLlama model)
  • codegenCodeGenForCausalLM (CodeGen model)
  • cohereCohereForCausalLM (Cohere model)
  • cpmantCpmAntForCausalLM (CPM-Ant model)
  • ctrlCTRLLMHeadModel (CTRL model)
  • data2vec-textData2VecTextForCausalLM (Data2VecText model)
  • dbrxDbrxForCausalLM (DBRX model)
  • electraElectraForCausalLM (ELECTRA model)
  • ernieErnieForCausalLM (ERNIE model)
  • falconFalconForCausalLM (Falcon model)
  • fuyuFuyuForCausalLM (Fuyu model)
  • gemmaGemmaForCausalLM (Gemma model)
  • gemma2Gemma2ForCausalLM (Gemma2 model)
  • gitGitForCausalLM (GIT model)
  • gpt-sw3GPT2LMHeadModel (GPT-Sw3 model)
  • gpt2GPT2LMHeadModel (OpenAI GPT-2 model)
  • gpt_bigcodeGPTBigCodeForCausalLM (GPTBigCode model)
  • gpt_neoGPTNeoForCausalLM (GPT Neo model)
  • gpt_neoxGPTNeoXForCausalLM (GPT NeoX model)
  • gpt_neox_japaneseGPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model)
  • gptjGPTJForCausalLM (GPT-J model)
  • jambaJambaForCausalLM (Jamba model)
  • jetmoeJetMoeForCausalLM (JetMoe model)
  • llamaLlamaForCausalLM (LLaMA model)
  • mambaMambaForCausalLM (Mamba model)
  • mamba2Mamba2ForCausalLM (mamba2 model)
  • marianMarianForCausalLM (Marian model)
  • mbartMBartForCausalLM (mBART model)
  • megaMegaForCausalLM (MEGA model)
  • megatron-bertMegatronBertForCausalLM (Megatron-BERT model)
  • mistralMistralForCausalLM (Mistral model)
  • mixtralMixtralForCausalLM (Mixtral model)
  • mptMptForCausalLM (MPT model)
  • musicgenMusicgenForCausalLM (MusicGen model)
  • musicgen_melodyMusicgenMelodyForCausalLM (MusicGen Melody model)
  • mvpMvpForCausalLM (MVP model)
  • nemotronNemotronForCausalLM (Nemotron model)
  • olmoOlmoForCausalLM (OLMo model)
  • open-llamaOpenLlamaForCausalLM (OpenLlama model)
  • openai-gptOpenAIGPTLMHeadModel (OpenAI GPT model)
  • optOPTForCausalLM (OPT model)
  • pegasusPegasusForCausalLM (Pegasus model)
  • persimmonPersimmonForCausalLM (Persimmon model)
  • phiPhiForCausalLM (Phi model)
  • phi3Phi3ForCausalLM (Phi3 model)
  • plbartPLBartForCausalLM (PLBart model)
  • prophetnetProphetNetForCausalLM (ProphetNet model)
  • qdqbertQDQBertLMHeadModel (QDQBert model)
  • qwen2Qwen2ForCausalLM (Qwen2 model)
  • qwen2_moeQwen2MoeForCausalLM (Qwen2MoE model)
  • recurrent_gemmaRecurrentGemmaForCausalLM (RecurrentGemma model)
  • reformerReformerModelWithLMHead (Reformer model)
  • rembertRemBertForCausalLM (RemBERT model)
  • robertaRobertaForCausalLM (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForCausalLM (RoCBert model)
  • roformerRoFormerForCausalLM (RoFormer model)
  • rwkvRwkvForCausalLM (RWKV model)
  • speech_to_text_2Speech2Text2ForCausalLM (Speech2Text2 model)
  • stablelmStableLmForCausalLM (StableLm model)
  • starcoder2Starcoder2ForCausalLM (Starcoder2 model)
  • transfo-xlTransfoXLLMHeadModel (Transformer-XL model)
  • trocrTrOCRForCausalLM (TrOCR model)
  • whisperWhisperForCausalLM (Whisper model)
  • xglmXGLMForCausalLM (XGLM model)
  • xlmXLMWithLMHeadModel (XLM model)
  • xlm-prophetnetXLMProphetNetForCausalLM (XLM-ProphetNet model)
  • xlm-robertaXLMRobertaForCausalLM (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForCausalLM (XLM-RoBERTa-XL model)
  • xlnetXLNetLMHeadModel (XLNet model)
  • xmodXmodForCausalLM (X-MOD model)
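The selection logic above can be sketched as a registry lookup keyed on model_type, with a pattern-matching fallback on the name or path. This is a toy illustration with a hypothetical registry, not the real Transformers mapping:

```python
# Simplified sketch of the class-selection logic: prefer the config's
# model_type; fall back to substring matching on the model name or path.
# REGISTRY is a hypothetical, heavily truncated mapping for illustration.

REGISTRY = {
    "gpt2": "GPT2LMHeadModel",
    "llama": "LlamaForCausalLM",
    "bert": "BertLMHeadModel",
}

def select_model_class(name_or_path, model_type=None):
    if model_type is not None:            # config.model_type takes priority
        return REGISTRY[model_type]
    for key, cls in REGISTRY.items():     # fallback: pattern match the name
        if key in name_or_path:
            return cls
    raise ValueError(f"Unrecognized model in {name_or_path!r}")

print(select_model_class("openai-community/gpt2"))       # GPT2LMHeadModel
print(select_model_class("my-dir", model_type="llama"))  # LlamaForCausalLM
```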

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
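Assuming torch is installed, the evaluation/training toggle behaves like it does on any torch.nn.Module; a plain module illustrates the same switch a loaded model goes through:

```python
import torch.nn as nn

# from_pretrained returns the model already in eval mode; a small
# nn.Module shows the same training-flag toggle.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

model.eval()            # dropout is deactivated
print(model.training)   # False

model.train()           # back in training mode: dropout is active again
print(model.training)   # True
```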

Examples:

>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCausalLM.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForCausalLM

class transformers.TFAutoModelForCausalLM

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BertConfig configuration class: TFBertLMHeadModel (BERT model)
    • CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model)
    • CamembertConfig configuration class: TFCamembertForCausalLM (CamemBERT model)
    • GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model)
    • GPTJConfig configuration class: TFGPTJForCausalLM (GPT-J model)
    • MistralConfig configuration class: TFMistralForCausalLM (Mistral model)
    • OPTConfig configuration class: TFOPTForCausalLM (OPT model)
    • OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model)
    • RemBertConfig configuration class: TFRemBertForCausalLM (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForCausalLM (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForCausalLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
    • TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model)
    • XGLMConfig configuration class: TFXGLMForCausalLM (XGLM model)
    • XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForCausalLM (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForCausalLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForCausalLM.from_config(config)

from_pretrained

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bertTFBertLMHeadModel (BERT model)
  • camembertTFCamembertForCausalLM (CamemBERT model)
  • ctrlTFCTRLLMHeadModel (CTRL model)
  • gpt-sw3TFGPT2LMHeadModel (GPT-Sw3 model)
  • gpt2TFGPT2LMHeadModel (OpenAI GPT-2 model)
  • gptjTFGPTJForCausalLM (GPT-J model)
  • mistralTFMistralForCausalLM (Mistral model)
  • openai-gptTFOpenAIGPTLMHeadModel (OpenAI GPT model)
  • optTFOPTForCausalLM (OPT model)
  • rembertTFRemBertForCausalLM (RemBERT model)
  • robertaTFRobertaForCausalLM (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
  • roformerTFRoFormerForCausalLM (RoFormer model)
  • transfo-xlTFTransfoXLLMHeadModel (Transformer-XL model)
  • xglmTFXGLMForCausalLM (XGLM model)
  • xlmTFXLMWithLMHeadModel (XLM model)
  • xlm-robertaTFXLMRobertaForCausalLM (XLM-RoBERTa model)
  • xlnetTFXLNetLMHeadModel (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForCausalLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForCausalLM.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForCausalLM

class transformers.FlaxAutoModelForCausalLM

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BartConfig configuration class: FlaxBartForCausalLM (BART model)
    • BertConfig configuration class: FlaxBertForCausalLM (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForCausalLM (BigBird model)
    • BloomConfig configuration class: FlaxBloomForCausalLM (BLOOM model)
    • ElectraConfig configuration class: FlaxElectraForCausalLM (ELECTRA model)
    • GPT2Config configuration class: FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
    • GPTJConfig configuration class: FlaxGPTJForCausalLM (GPT-J model)
    • GPTNeoConfig configuration class: FlaxGPTNeoForCausalLM (GPT Neo model)
    • GemmaConfig configuration class: FlaxGemmaForCausalLM (Gemma model)
    • LlamaConfig configuration class: FlaxLlamaForCausalLM (LLaMA model)
    • MistralConfig configuration class: FlaxMistralForCausalLM (Mistral model)
    • OPTConfig configuration class: FlaxOPTForCausalLM (OPT model)
    • RobertaConfig configuration class: FlaxRobertaForCausalLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
    • XGLMConfig configuration class: FlaxXGLMForCausalLM (XGLM model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForCausalLM (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForCausalLM.from_config(config)

from_pretrained

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bartFlaxBartForCausalLM (BART model)
  • bertFlaxBertForCausalLM (BERT model)
  • big_birdFlaxBigBirdForCausalLM (BigBird model)
  • bloomFlaxBloomForCausalLM (BLOOM model)
  • electraFlaxElectraForCausalLM (ELECTRA model)
  • gemmaFlaxGemmaForCausalLM (Gemma model)
  • gpt-sw3FlaxGPT2LMHeadModel (GPT-Sw3 model)
  • gpt2FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
  • gpt_neoFlaxGPTNeoForCausalLM (GPT Neo model)
  • gptjFlaxGPTJForCausalLM (GPT-J model)
  • llamaFlaxLlamaForCausalLM (LLaMA model)
  • mistralFlaxMistralForCausalLM (Mistral model)
  • optFlaxOPTForCausalLM (OPT model)
  • robertaFlaxRobertaForCausalLM (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
  • xglmFlaxXGLMForCausalLM (XGLM model)
  • xlm-robertaFlaxXLMRobertaForCausalLM (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForCausalLM.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForMaskedLM

class transformers.AutoModelForMaskedLM

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model)
    • BartConfig configuration class: BartForConditionalGeneration (BART model)
    • BertConfig configuration class: BertForMaskedLM (BERT model)
    • BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model)
    • CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
    • ConvBertConfig configuration class: ConvBertForMaskedLM (ConvBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model)
    • DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
    • ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model)
    • ErnieConfig configuration class: ErnieForMaskedLM (ERNIE model)
    • EsmConfig configuration class: EsmForMaskedLM (ESM model)
    • FNetConfig configuration class: FNetForMaskedLM (FNet model)
    • FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
    • FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model)
    • IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
    • LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
    • LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
    • LukeConfig configuration class: LukeForMaskedLM (LUKE model)
    • MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
    • MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
    • MegaConfig configuration class: MegaForMaskedLM (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForMaskedLM (Megatron-BERT model)
    • MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model)
    • MraConfig configuration class: MraForMaskedLM (MRA model)
    • MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
    • NezhaConfig configuration class: NezhaForMaskedLM (Nezha model)
    • NystromformerConfig configuration class: NystromformerForMaskedLM (Nyströmformer model)
    • PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model)
    • QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model)
    • ReformerConfig configuration class: ReformerForMaskedLM (Reformer model)
    • RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForMaskedLM (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model)
    • RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
    • TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model)
    • XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
    • XmodConfig configuration class: XmodForMaskedLM (X-MOD model)
    • YosoConfig configuration class: YosoForMaskedLM (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForMaskedLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMaskedLM.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() would be a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

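The kwargs behavior just described can be sketched in plain Python. This is a simplified illustration, not the actual Transformers internals; the `split_kwargs` helper and the attribute set are hypothetical names chosen for the example.

```python
# Hedged sketch of the kwargs handling described above, for the case where no
# explicit config is passed. NOT the real Transformers implementation.
def split_kwargs(config_attrs, kwargs):
    """Split kwargs into configuration overrides and model-__init__ kwargs."""
    config_overrides = {k: v for k, v in kwargs.items() if k in config_attrs}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attrs}
    return config_overrides, model_kwargs


# Keys that match configuration attributes override the config; the remaining
# keys are forwarded to the underlying model's __init__.
overrides, model_kwargs = split_kwargs(
    config_attrs={"output_attentions", "hidden_size"},
    kwargs={"output_attentions": True, "torch_dtype": "float16"},
)
print(overrides)     # {'output_attentions': True}
print(model_kwargs)  # {'torch_dtype': 'float16'}
```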
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertForMaskedLM (ALBERT model)
  • bartBartForConditionalGeneration (BART model)
  • bertBertForMaskedLM (BERT model)
  • big_birdBigBirdForMaskedLM (BigBird model)
  • camembertCamembertForMaskedLM (CamemBERT model)
  • convbertConvBertForMaskedLM (ConvBERT model)
  • data2vec-textData2VecTextForMaskedLM (Data2VecText model)
  • debertaDebertaForMaskedLM (DeBERTa model)
  • deberta-v2DebertaV2ForMaskedLM (DeBERTa-v2 model)
  • distilbertDistilBertForMaskedLM (DistilBERT model)
  • electraElectraForMaskedLM (ELECTRA model)
  • ernieErnieForMaskedLM (ERNIE model)
  • esmEsmForMaskedLM (ESM model)
  • flaubertFlaubertWithLMHeadModel (FlauBERT model)
  • fnetFNetForMaskedLM (FNet model)
  • funnelFunnelForMaskedLM (Funnel Transformer model)
  • ibertIBertForMaskedLM (I-BERT model)
  • layoutlmLayoutLMForMaskedLM (LayoutLM model)
  • longformerLongformerForMaskedLM (Longformer model)
  • lukeLukeForMaskedLM (LUKE model)
  • mbartMBartForConditionalGeneration (mBART model)
  • megaMegaForMaskedLM (MEGA model)
  • megatron-bertMegatronBertForMaskedLM (Megatron-BERT model)
  • mobilebertMobileBertForMaskedLM (MobileBERT model)
  • mpnetMPNetForMaskedLM (MPNet model)
  • mraMraForMaskedLM (MRA model)
  • mvpMvpForConditionalGeneration (MVP model)
  • nezhaNezhaForMaskedLM (Nezha model)
  • nystromformerNystromformerForMaskedLM (Nyströmformer model)
  • perceiverPerceiverForMaskedLM (Perceiver model)
  • qdqbertQDQBertForMaskedLM (QDQBert model)
  • reformerReformerForMaskedLM (Reformer model)
  • rembertRemBertForMaskedLM (RemBERT model)
  • robertaRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForMaskedLM (RoCBert model)
  • roformerRoFormerForMaskedLM (RoFormer model)
  • squeezebertSqueezeBertForMaskedLM (SqueezeBERT model)
  • tapasTapasForMaskedLM (TAPAS model)
  • wav2vec2Wav2Vec2ForMaskedLM (Wav2Vec2 model)
  • xlmXLMWithLMHeadModel (XLM model)
  • xlm-robertaXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
  • xmodXmodForMaskedLM (X-MOD model)
  • yosoYosoForMaskedLM (YOSO model)

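The selection logic above (prefer the config's model_type, fall back to pattern matching on the name or path) can be sketched as follows. This is a hypothetical, heavily simplified mapping for illustration only; the real registry covers every model type listed above.

```python
# Hedged sketch of the class-selection fallback described above.
# MODEL_MAPPING and resolve_class are hypothetical simplifications.
MODEL_MAPPING = {
    "bert": "BertForMaskedLM",
    "roberta": "RobertaForMaskedLM",
    "xlm-roberta": "XLMRobertaForMaskedLM",
}


def resolve_class(model_type=None, name_or_path=""):
    # Prefer the model_type property of the config object when available.
    if model_type in MODEL_MAPPING:
        return MODEL_MAPPING[model_type]
    # Otherwise pattern-match on the name/path, longest key first so that
    # "xlm-roberta" wins over "roberta" (and "roberta" over "bert").
    for key in sorted(MODEL_MAPPING, key=len, reverse=True):
        if key in name_or_path:
            return MODEL_MAPPING[key]
    raise ValueError(f"Unrecognized model: {name_or_path}")


print(resolve_class(name_or_path="google-bert/bert-base-cased"))
# BertForMaskedLM
print(resolve_class(name_or_path="FacebookAI/xlm-roberta-base"))
# XLMRobertaForMaskedLM
```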
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForMaskedLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedLM.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForMaskedLM

class transformers.TFAutoModelForMaskedLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForMaskedLM (ALBERT model)
    • BertConfig configuration class: TFBertForMaskedLM (BERT model)
    • CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertForMaskedLM (ConvBERT model)
    • DebertaConfig configuration class: TFDebertaForMaskedLM (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2ForMaskedLM (DeBERTa-v2 model)
    • DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForMaskedLM (ELECTRA model)
    • EsmConfig configuration class: TFEsmForMaskedLM (ESM model)
    • FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForMaskedLM (Funnel Transformer model)
    • LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model)
    • LongformerConfig configuration class: TFLongformerForMaskedLM (Longformer model)
    • MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model)
    • MobileBertConfig configuration class: TFMobileBertForMaskedLM (MobileBERT model)
    • RemBertConfig configuration class: TFRemBertForMaskedLM (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForMaskedLM (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model)
    • XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMaskedLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForMaskedLM.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertTFAlbertForMaskedLM (ALBERT model)
  • bertTFBertForMaskedLM (BERT model)
  • camembertTFCamembertForMaskedLM (CamemBERT model)
  • convbertTFConvBertForMaskedLM (ConvBERT model)
  • debertaTFDebertaForMaskedLM (DeBERTa model)
  • deberta-v2TFDebertaV2ForMaskedLM (DeBERTa-v2 model)
  • distilbertTFDistilBertForMaskedLM (DistilBERT model)
  • electraTFElectraForMaskedLM (ELECTRA model)
  • esmTFEsmForMaskedLM (ESM model)
  • flaubertTFFlaubertWithLMHeadModel (FlauBERT model)
  • funnelTFFunnelForMaskedLM (Funnel Transformer model)
  • layoutlmTFLayoutLMForMaskedLM (LayoutLM model)
  • longformerTFLongformerForMaskedLM (Longformer model)
  • mobilebertTFMobileBertForMaskedLM (MobileBERT model)
  • mpnetTFMPNetForMaskedLM (MPNet model)
  • rembertTFRemBertForMaskedLM (RemBERT model)
  • robertaTFRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • roformerTFRoFormerForMaskedLM (RoFormer model)
  • tapasTFTapasForMaskedLM (TAPAS model)
  • xlmTFXLMWithLMHeadModel (XLM model)
  • xlm-robertaTFXLMRobertaForMaskedLM (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMaskedLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedLM.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForMaskedLM

class transformers.FlaxAutoModelForMaskedLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForMaskedLM (ALBERT model)
    • BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
    • BertConfig configuration class: FlaxBertForMaskedLM (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForMaskedLM (BigBird model)
    • DistilBertConfig configuration class: FlaxDistilBertForMaskedLM (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraForMaskedLM (ELECTRA model)
    • MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
    • RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForMaskedLM.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertFlaxAlbertForMaskedLM (ALBERT model)
  • bartFlaxBartForConditionalGeneration (BART model)
  • bertFlaxBertForMaskedLM (BERT model)
  • big_birdFlaxBigBirdForMaskedLM (BigBird model)
  • distilbertFlaxDistilBertForMaskedLM (DistilBERT model)
  • electraFlaxElectraForMaskedLM (ELECTRA model)
  • mbartFlaxMBartForConditionalGeneration (mBART model)
  • robertaFlaxRobertaForMaskedLM (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  • roformerFlaxRoFormerForMaskedLM (RoFormer model)
  • xlm-robertaFlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMaskedLM.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForMaskGeneration

class transformers.AutoModelForMaskGeneration

< >

( *args **kwargs )

TFAutoModelForMaskGeneration

class transformers.TFAutoModelForMaskGeneration

< >

( *args **kwargs )

AutoModelForSeq2SeqLM

class transformers.AutoModelForSeq2SeqLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BartConfig configuration class: BartForConditionalGeneration (BART model)
    • BigBirdPegasusConfig configuration class: BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model)
    • BlenderbotConfig configuration class: BlenderbotForConditionalGeneration (Blenderbot model)
    • BlenderbotSmallConfig configuration class: BlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
    • EncoderDecoderConfig configuration class: EncoderDecoderModel (Encoder decoder model)
    • FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
    • GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
    • LEDConfig configuration class: LEDForConditionalGeneration (LED model)
    • LongT5Config configuration class: LongT5ForConditionalGeneration (LongT5 model)
    • M2M100Config configuration class: M2M100ForConditionalGeneration (M2M100 model)
    • MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
    • MT5Config configuration class: MT5ForConditionalGeneration (MT5 model)
    • MarianConfig configuration class: MarianMTModel (Marian model)
    • MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
    • NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model)
    • PLBartConfig configuration class: PLBartForConditionalGeneration (PLBart model)
    • PegasusConfig configuration class: PegasusForConditionalGeneration (Pegasus model)
    • PegasusXConfig configuration class: PegasusXForConditionalGeneration (PEGASUS-X model)
    • ProphetNetConfig configuration class: ProphetNetForConditionalGeneration (ProphetNet model)
    • SeamlessM4TConfig configuration class: SeamlessM4TForTextToText (SeamlessM4T model)
    • SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model)
    • SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model)
    • T5Config configuration class: T5ForConditionalGeneration (T5 model)
    • UMT5Config configuration class: UMT5ForConditionalGeneration (UMT5 model)
    • XLMProphetNetConfig configuration class: XLMProphetNetForConditionalGeneration (XLM-ProphetNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-t5/t5-base")
>>> model = AutoModelForSeq2SeqLM.from_config(config)
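The configuration does not have to come from the Hub. As a fully offline sketch (the tiny hyperparameters below are arbitrary and do not correspond to any real checkpoint), a hand-built T5Config is dispatched to the same model class:

```python
from transformers import AutoModelForSeq2SeqLM, T5Config

# Arbitrary tiny dimensions -- this yields a randomly initialized toy model,
# not pretrained weights.
config = T5Config(
    vocab_size=128, d_model=16, d_kv=8, d_ff=32,
    num_layers=1, num_heads=2, decoder_start_token_id=0,
)
model = AutoModelForSeq2SeqLM.from_config(config)
print(type(model).__name__)     # → T5ForConditionalGeneration
print(model.num_parameters() > 0)  # → True
```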

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should consider whether using save_pretrained() and from_pretrained() would be a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bartBartForConditionalGeneration (BART model)
  • bigbird_pegasusBigBirdPegasusForConditionalGeneration (BigBird-Pegasus model)
  • blenderbotBlenderbotForConditionalGeneration (Blenderbot model)
  • blenderbot-smallBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
  • encoder-decoderEncoderDecoderModel (Encoder decoder model)
  • fsmtFSMTForConditionalGeneration (FairSeq Machine-Translation model)
  • gptsan-japaneseGPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
  • ledLEDForConditionalGeneration (LED model)
  • longt5LongT5ForConditionalGeneration (LongT5 model)
  • m2m_100M2M100ForConditionalGeneration (M2M100 model)
  • marianMarianMTModel (Marian model)
  • mbartMBartForConditionalGeneration (mBART model)
  • mt5MT5ForConditionalGeneration (MT5 model)
  • mvpMvpForConditionalGeneration (MVP model)
  • nllb-moeNllbMoeForConditionalGeneration (NLLB-MOE model)
  • pegasusPegasusForConditionalGeneration (Pegasus model)
  • pegasus_xPegasusXForConditionalGeneration (PEGASUS-X model)
  • plbartPLBartForConditionalGeneration (PLBart model)
  • prophetnetProphetNetForConditionalGeneration (ProphetNet model)
  • seamless_m4tSeamlessM4TForTextToText (SeamlessM4T model)
  • seamless_m4t_v2SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model)
  • switch_transformersSwitchTransformersForConditionalGeneration (SwitchTransformers model)
  • t5T5ForConditionalGeneration (T5 model)
  • umt5UMT5ForConditionalGeneration (UMT5 model)
  • xlm-prophetnetXLMProphetNetForConditionalGeneration (XLM-ProphetNet model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
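The mode toggle can be checked via the model's training attribute. A minimal offline sketch using a randomly initialized tiny T5 (arbitrary hyperparameters, not a real checkpoint):

```python
from transformers import AutoModelForSeq2SeqLM, T5Config

# Tiny, randomly initialized model. Note that from_config() returns the model
# in training mode; it is from_pretrained() that calls model.eval() for you.
model = AutoModelForSeq2SeqLM.from_config(
    T5Config(vocab_size=128, d_model=16, d_kv=8, d_ff=32, num_layers=1, num_heads=2)
)
model.eval()
print(model.training)  # → False
model.train()
print(model.training)  # → True
```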

Examples:

>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/t5_tf_model_config.json")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(
...     "./tf_model/t5_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForSeq2SeqLM

class transformers.TFAutoModelForSeq2SeqLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BartConfig configuration class: TFBartForConditionalGeneration (BART model)
    • BlenderbotConfig configuration class: TFBlenderbotForConditionalGeneration (Blenderbot model)
    • BlenderbotSmallConfig configuration class: TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
    • EncoderDecoderConfig configuration class: TFEncoderDecoderModel (Encoder decoder model)
    • LEDConfig configuration class: TFLEDForConditionalGeneration (LED model)
    • MBartConfig configuration class: TFMBartForConditionalGeneration (mBART model)
    • MT5Config configuration class: TFMT5ForConditionalGeneration (MT5 model)
    • MarianConfig configuration class: TFMarianMTModel (Marian model)
    • PegasusConfig configuration class: TFPegasusForConditionalGeneration (Pegasus model)
    • T5Config configuration class: TFT5ForConditionalGeneration (T5 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-t5/t5-base")
>>> model = TFAutoModelForSeq2SeqLM.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bartTFBartForConditionalGeneration (BART model)
  • blenderbotTFBlenderbotForConditionalGeneration (Blenderbot model)
  • blenderbot-smallTFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
  • encoder-decoderTFEncoderDecoderModel (Encoder decoder model)
  • ledTFLEDForConditionalGeneration (LED model)
  • marianTFMarianMTModel (Marian model)
  • mbartTFMBartForConditionalGeneration (mBART model)
  • mt5TFMT5ForConditionalGeneration (MT5 model)
  • pegasusTFPegasusForConditionalGeneration (Pegasus model)
  • t5TFT5ForConditionalGeneration (T5 model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

>>> # Update configuration during loading
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(
...     "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForSeq2SeqLM

class transformers.FlaxAutoModelForSeq2SeqLM

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BartConfig configuration class: FlaxBartForConditionalGeneration (BART model)
    • BlenderbotConfig configuration class: FlaxBlenderbotForConditionalGeneration (Blenderbot model)
    • BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
    • EncoderDecoderConfig configuration class: FlaxEncoderDecoderModel (Encoder decoder model)
    • LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model)
    • MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
    • MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model)
    • MarianConfig configuration class: FlaxMarianMTModel (Marian model)
    • PegasusConfig configuration class: FlaxPegasusForConditionalGeneration (Pegasus model)
    • T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-t5/t5-base")
>>> model = FlaxAutoModelForSeq2SeqLM.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bartFlaxBartForConditionalGeneration (BART model)
  • blenderbotFlaxBlenderbotForConditionalGeneration (Blenderbot model)
  • blenderbot-smallFlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
  • encoder-decoderFlaxEncoderDecoderModel (Encoder decoder model)
  • longt5FlaxLongT5ForConditionalGeneration (LongT5 model)
  • marianFlaxMarianMTModel (Marian model)
  • mbartFlaxMBartForConditionalGeneration (mBART model)
  • mt5FlaxMT5ForConditionalGeneration (MT5 model)
  • pegasusFlaxPegasusForConditionalGeneration (Pegasus model)
  • t5FlaxT5ForConditionalGeneration (T5 model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained(
...     "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForSequenceClassification

class transformers.AutoModelForSequenceClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForSequenceClassification (ALBERT model)
    • BartConfig configuration class: BartForSequenceClassification (BART model)
    • BertConfig configuration class: BertForSequenceClassification (BERT model)
    • BigBirdConfig configuration class: BigBirdForSequenceClassification (BigBird model)
    • BigBirdPegasusConfig configuration class: BigBirdPegasusForSequenceClassification (BigBird-Pegasus model)
    • BioGptConfig configuration class: BioGptForSequenceClassification (BioGpt model)
    • BloomConfig configuration class: BloomForSequenceClassification (BLOOM model)
    • CTRLConfig configuration class: CTRLForSequenceClassification (CTRL model)
    • CamembertConfig configuration class: CamembertForSequenceClassification (CamemBERT model)
    • CanineConfig configuration class: CanineForSequenceClassification (CANINE model)
    • ConvBertConfig configuration class: ConvBertForSequenceClassification (ConvBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForSequenceClassification (Data2VecText model)
    • DebertaConfig configuration class: DebertaForSequenceClassification (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2ForSequenceClassification (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForSequenceClassification (DistilBERT model)
    • ElectraConfig configuration class: ElectraForSequenceClassification (ELECTRA model)
    • ErnieConfig configuration class: ErnieForSequenceClassification (ERNIE model)
    • ErnieMConfig configuration class: ErnieMForSequenceClassification (ErnieM model)
    • EsmConfig configuration class: EsmForSequenceClassification (ESM model)
    • FNetConfig configuration class: FNetForSequenceClassification (FNet model)
    • FalconConfig configuration class: FalconForSequenceClassification (Falcon model)
    • FlaubertConfig configuration class: FlaubertForSequenceClassification (FlauBERT model)
    • FunnelConfig configuration class: FunnelForSequenceClassification (Funnel Transformer model)
    • GPT2Config configuration class: GPT2ForSequenceClassification (OpenAI GPT-2 model)
    • GPTBigCodeConfig configuration class: GPTBigCodeForSequenceClassification (GPTBigCode model)
    • GPTJConfig configuration class: GPTJForSequenceClassification (GPT-J model)
    • GPTNeoConfig configuration class: GPTNeoForSequenceClassification (GPT Neo model)
    • GPTNeoXConfig configuration class: GPTNeoXForSequenceClassification (GPT NeoX model)
    • Gemma2Config configuration class: Gemma2ForSequenceClassification (Gemma2 model)
    • GemmaConfig configuration class: GemmaForSequenceClassification (Gemma model)
    • IBertConfig configuration class: IBertForSequenceClassification (I-BERT model)
    • JambaConfig configuration class: JambaForSequenceClassification (Jamba model)
    • JetMoeConfig configuration class: JetMoeForSequenceClassification (JetMoe model)
    • LEDConfig configuration class: LEDForSequenceClassification (LED model)
    • LayoutLMConfig configuration class: LayoutLMForSequenceClassification (LayoutLM model)
    • LayoutLMv2Config configuration class: LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
    • LayoutLMv3Config configuration class: LayoutLMv3ForSequenceClassification (LayoutLMv3 model)
    • LiltConfig configuration class: LiltForSequenceClassification (LiLT model)
    • LlamaConfig configuration class: LlamaForSequenceClassification (LLaMA model)
    • LongformerConfig configuration class: LongformerForSequenceClassification (Longformer model)
    • LukeConfig configuration class: LukeForSequenceClassification (LUKE model)
    • MBartConfig configuration class: MBartForSequenceClassification (mBART model)
    • MPNetConfig configuration class: MPNetForSequenceClassification (MPNet model)
    • MT5Config configuration class: MT5ForSequenceClassification (MT5 model)
    • MarkupLMConfig configuration class: MarkupLMForSequenceClassification (MarkupLM model)
    • MegaConfig configuration class: MegaForSequenceClassification (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForSequenceClassification (Megatron-BERT model)
    • MistralConfig configuration class: MistralForSequenceClassification (Mistral model)
    • MixtralConfig configuration class: MixtralForSequenceClassification (Mixtral model)
    • MobileBertConfig configuration class: MobileBertForSequenceClassification (MobileBERT model)
    • MptConfig configuration class: MptForSequenceClassification (MPT model)
    • MraConfig configuration class: MraForSequenceClassification (MRA model)
    • MvpConfig configuration class: MvpForSequenceClassification (MVP model)
    • NemotronConfig configuration class: NemotronForSequenceClassification (Nemotron model)
    • NezhaConfig configuration class: NezhaForSequenceClassification (Nezha model)
    • NystromformerConfig configuration class: NystromformerForSequenceClassification (Nyströmformer model)
    • OPTConfig configuration class: OPTForSequenceClassification (OPT model)
    • OpenAIGPTConfig configuration class: OpenAIGPTForSequenceClassification (OpenAI GPT model)
    • OpenLlamaConfig configuration class: OpenLlamaForSequenceClassification (OpenLlama model)
    • PLBartConfig configuration class: PLBartForSequenceClassification (PLBart model)
    • PerceiverConfig configuration class: PerceiverForSequenceClassification (Perceiver model)
    • PersimmonConfig configuration class: PersimmonForSequenceClassification (Persimmon model)
    • Phi3Config configuration class: Phi3ForSequenceClassification (Phi3 model)
    • PhiConfig configuration class: PhiForSequenceClassification (Phi model)
    • QDQBertConfig configuration class: QDQBertForSequenceClassification (QDQBert model)
    • Qwen2Config configuration class: Qwen2ForSequenceClassification (Qwen2 model)
    • Qwen2MoeConfig configuration class: Qwen2MoeForSequenceClassification (Qwen2MoE model)
    • ReformerConfig configuration class: ReformerForSequenceClassification (Reformer model)
    • RemBertConfig configuration class: RemBertForSequenceClassification (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForSequenceClassification (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForSequenceClassification (RoFormer model)
    • RobertaConfig configuration class: RobertaForSequenceClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
    • SqueezeBertConfig configuration class: SqueezeBertForSequenceClassification (SqueezeBERT model)
    • StableLmConfig configuration class: StableLmForSequenceClassification (StableLm model)
    • Starcoder2Config configuration class: Starcoder2ForSequenceClassification (Starcoder2 model)
    • T5Config configuration class: T5ForSequenceClassification (T5 model)
    • TapasConfig configuration class: TapasForSequenceClassification (TAPAS model)
    • TransfoXLConfig configuration class: TransfoXLForSequenceClassification (Transformer-XL model)
    • UMT5Config configuration class: UMT5ForSequenceClassification (UMT5 model)
    • XLMConfig configuration class: XLMForSequenceClassification (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForSequenceClassification (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetForSequenceClassification (XLNet model)
    • XmodConfig configuration class: XmodForSequenceClassification (X-MOD model)
    • YosoConfig configuration class: YosoForSequenceClassification (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForSequenceClassification.from_config(config)
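The configuration-class dispatch can also be observed entirely offline: a minimal sketch (assuming transformers and torch are installed; the tiny layer sizes are hypothetical, chosen only to keep instantiation fast) builds a BertConfig locally and shows that from_config() returns a randomly initialized BertForSequenceClassification without loading any weights.

```python
from transformers import BertConfig, AutoModelForSequenceClassification

# Build a tiny BERT config locally (hypothetical sizes; no download needed).
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=3,
)

# Dispatch is driven by the configuration class:
# BertConfig -> BertForSequenceClassification.
model = AutoModelForSequenceClassification.from_config(config)
print(type(model).__name__)     # BertForSequenceClassification
print(model.config.num_labels)  # 3
```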

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, consider whether using save_pretrained() and from_pretrained() would be a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertForSequenceClassification (ALBERT model)
  • bartBartForSequenceClassification (BART model)
  • bertBertForSequenceClassification (BERT model)
  • big_birdBigBirdForSequenceClassification (BigBird model)
  • bigbird_pegasusBigBirdPegasusForSequenceClassification (BigBird-Pegasus model)
  • biogptBioGptForSequenceClassification (BioGpt model)
  • bloomBloomForSequenceClassification (BLOOM model)
  • camembertCamembertForSequenceClassification (CamemBERT model)
  • canineCanineForSequenceClassification (CANINE model)
  • code_llamaLlamaForSequenceClassification (CodeLlama model)
  • convbertConvBertForSequenceClassification (ConvBERT model)
  • ctrlCTRLForSequenceClassification (CTRL model)
  • data2vec-textData2VecTextForSequenceClassification (Data2VecText model)
  • debertaDebertaForSequenceClassification (DeBERTa model)
  • deberta-v2DebertaV2ForSequenceClassification (DeBERTa-v2 model)
  • distilbertDistilBertForSequenceClassification (DistilBERT model)
  • electraElectraForSequenceClassification (ELECTRA model)
  • ernieErnieForSequenceClassification (ERNIE model)
  • ernie_mErnieMForSequenceClassification (ErnieM model)
  • esmEsmForSequenceClassification (ESM model)
  • falconFalconForSequenceClassification (Falcon model)
  • flaubertFlaubertForSequenceClassification (FlauBERT model)
  • fnetFNetForSequenceClassification (FNet model)
  • funnelFunnelForSequenceClassification (Funnel Transformer model)
  • gemmaGemmaForSequenceClassification (Gemma model)
  • gemma2Gemma2ForSequenceClassification (Gemma2 model)
  • gpt-sw3GPT2ForSequenceClassification (GPT-Sw3 model)
  • gpt2GPT2ForSequenceClassification (OpenAI GPT-2 model)
  • gpt_bigcodeGPTBigCodeForSequenceClassification (GPTBigCode model)
  • gpt_neoGPTNeoForSequenceClassification (GPT Neo model)
  • gpt_neoxGPTNeoXForSequenceClassification (GPT NeoX model)
  • gptjGPTJForSequenceClassification (GPT-J model)
  • ibertIBertForSequenceClassification (I-BERT model)
  • jambaJambaForSequenceClassification (Jamba model)
  • jetmoeJetMoeForSequenceClassification (JetMoe model)
  • layoutlmLayoutLMForSequenceClassification (LayoutLM model)
  • layoutlmv2LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3ForSequenceClassification (LayoutLMv3 model)
  • ledLEDForSequenceClassification (LED model)
  • liltLiltForSequenceClassification (LiLT model)
  • llamaLlamaForSequenceClassification (LLaMA model)
  • longformerLongformerForSequenceClassification (Longformer model)
  • lukeLukeForSequenceClassification (LUKE model)
  • markuplmMarkupLMForSequenceClassification (MarkupLM model)
  • mbartMBartForSequenceClassification (mBART model)
  • megaMegaForSequenceClassification (MEGA model)
  • megatron-bertMegatronBertForSequenceClassification (Megatron-BERT model)
  • mistralMistralForSequenceClassification (Mistral model)
  • mixtralMixtralForSequenceClassification (Mixtral model)
  • mobilebertMobileBertForSequenceClassification (MobileBERT model)
  • mpnetMPNetForSequenceClassification (MPNet model)
  • mptMptForSequenceClassification (MPT model)
  • mraMraForSequenceClassification (MRA model)
  • mt5MT5ForSequenceClassification (MT5 model)
  • mvpMvpForSequenceClassification (MVP model)
  • nemotronNemotronForSequenceClassification (Nemotron model)
  • nezhaNezhaForSequenceClassification (Nezha model)
  • nystromformerNystromformerForSequenceClassification (Nyströmformer model)
  • open-llamaOpenLlamaForSequenceClassification (OpenLlama model)
  • openai-gptOpenAIGPTForSequenceClassification (OpenAI GPT model)
  • optOPTForSequenceClassification (OPT model)
  • perceiverPerceiverForSequenceClassification (Perceiver model)
  • persimmonPersimmonForSequenceClassification (Persimmon model)
  • phiPhiForSequenceClassification (Phi model)
  • phi3Phi3ForSequenceClassification (Phi3 model)
  • plbartPLBartForSequenceClassification (PLBart model)
  • qdqbertQDQBertForSequenceClassification (QDQBert model)
  • qwen2Qwen2ForSequenceClassification (Qwen2 model)
  • qwen2_moeQwen2MoeForSequenceClassification (Qwen2MoE model)
  • reformerReformerForSequenceClassification (Reformer model)
  • rembertRemBertForSequenceClassification (RemBERT model)
  • robertaRobertaForSequenceClassification (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForSequenceClassification (RoCBert model)
  • roformerRoFormerForSequenceClassification (RoFormer model)
  • squeezebertSqueezeBertForSequenceClassification (SqueezeBERT model)
  • stablelmStableLmForSequenceClassification (StableLm model)
  • starcoder2Starcoder2ForSequenceClassification (Starcoder2 model)
  • t5T5ForSequenceClassification (T5 model)
  • tapasTapasForSequenceClassification (TAPAS model)
  • transfo-xlTransfoXLForSequenceClassification (Transformer-XL model)
  • umt5UMT5ForSequenceClassification (UMT5 model)
  • xlmXLMForSequenceClassification (XLM model)
  • xlm-robertaXLMRobertaForSequenceClassification (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model)
  • xlnetXLNetForSequenceClassification (XLNet model)
  • xmodXmodForSequenceClassification (X-MOD model)
  • yosoYosoForSequenceClassification (YOSO model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSequenceClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
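Besides a Hub model id, from_pretrained() accepts a local directory produced by save_pretrained(). The sketch below (assuming transformers and torch; the tiny config sizes are hypothetical) runs the round trip fully offline and also demonstrates the evaluation-mode default described above.

```python
import tempfile

from transformers import BertConfig, AutoModelForSequenceClassification

# Tiny local config (hypothetical sizes); from_config gives random weights.
config = BertConfig(
    hidden_size=32, num_hidden_layers=1, num_attention_heads=2, intermediate_size=64
)
model = AutoModelForSequenceClassification.from_config(config)

# Save to a directory and reload through the auto class: no Hub access involved.
with tempfile.TemporaryDirectory() as tmp:
    model.save_pretrained(tmp)
    reloaded = AutoModelForSequenceClassification.from_pretrained(tmp)

print(type(reloaded).__name__)  # BertForSequenceClassification
print(reloaded.training)        # False: from_pretrained() calls model.eval()
reloaded.train()                # switch back before fine-tuning
print(reloaded.training)        # True
```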

TFAutoModelForSequenceClassification

class transformers.TFAutoModelForSequenceClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForSequenceClassification (ALBERT model)
    • BartConfig configuration class: TFBartForSequenceClassification (BART model)
    • BertConfig configuration class: TFBertForSequenceClassification (BERT model)
    • CTRLConfig configuration class: TFCTRLForSequenceClassification (CTRL model)
    • CamembertConfig configuration class: TFCamembertForSequenceClassification (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertForSequenceClassification (ConvBERT model)
    • DebertaConfig configuration class: TFDebertaForSequenceClassification (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2ForSequenceClassification (DeBERTa-v2 model)
    • DistilBertConfig configuration class: TFDistilBertForSequenceClassification (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForSequenceClassification (ELECTRA model)
    • EsmConfig configuration class: TFEsmForSequenceClassification (ESM model)
    • FlaubertConfig configuration class: TFFlaubertForSequenceClassification (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForSequenceClassification (Funnel Transformer model)
    • GPT2Config configuration class: TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
    • GPTJConfig configuration class: TFGPTJForSequenceClassification (GPT-J model)
    • LayoutLMConfig configuration class: TFLayoutLMForSequenceClassification (LayoutLM model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model)
    • LongformerConfig configuration class: TFLongformerForSequenceClassification (Longformer model)
    • MPNetConfig configuration class: TFMPNetForSequenceClassification (MPNet model)
    • MistralConfig configuration class: TFMistralForSequenceClassification (Mistral model)
    • MobileBertConfig configuration class: TFMobileBertForSequenceClassification (MobileBERT model)
    • OpenAIGPTConfig configuration class: TFOpenAIGPTForSequenceClassification (OpenAI GPT model)
    • RemBertConfig configuration class: TFRemBertForSequenceClassification (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForSequenceClassification (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForSequenceClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
    • TapasConfig configuration class: TFTapasForSequenceClassification (TAPAS model)
    • TransfoXLConfig configuration class: TFTransfoXLForSequenceClassification (Transformer-XL model)
    • XLMConfig configuration class: TFXLMForSequenceClassification (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetForSequenceClassification (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForSequenceClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertTFAlbertForSequenceClassification (ALBERT model)
  • bartTFBartForSequenceClassification (BART model)
  • bertTFBertForSequenceClassification (BERT model)
  • camembertTFCamembertForSequenceClassification (CamemBERT model)
  • convbertTFConvBertForSequenceClassification (ConvBERT model)
  • ctrlTFCTRLForSequenceClassification (CTRL model)
  • debertaTFDebertaForSequenceClassification (DeBERTa model)
  • deberta-v2TFDebertaV2ForSequenceClassification (DeBERTa-v2 model)
  • distilbertTFDistilBertForSequenceClassification (DistilBERT model)
  • electraTFElectraForSequenceClassification (ELECTRA model)
  • esmTFEsmForSequenceClassification (ESM model)
  • flaubertTFFlaubertForSequenceClassification (FlauBERT model)
  • funnelTFFunnelForSequenceClassification (Funnel Transformer model)
  • gpt-sw3TFGPT2ForSequenceClassification (GPT-Sw3 model)
  • gpt2TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
  • gptjTFGPTJForSequenceClassification (GPT-J model)
  • layoutlmTFLayoutLMForSequenceClassification (LayoutLM model)
  • layoutlmv3TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model)
  • longformerTFLongformerForSequenceClassification (Longformer model)
  • mistralTFMistralForSequenceClassification (Mistral model)
  • mobilebertTFMobileBertForSequenceClassification (MobileBERT model)
  • mpnetTFMPNetForSequenceClassification (MPNet model)
  • openai-gptTFOpenAIGPTForSequenceClassification (OpenAI GPT model)
  • rembertTFRemBertForSequenceClassification (RemBERT model)
  • robertaTFRobertaForSequenceClassification (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
  • roformerTFRoFormerForSequenceClassification (RoFormer model)
  • tapasTFTapasForSequenceClassification (TAPAS model)
  • transfo-xlTFTransfoXLForSequenceClassification (Transformer-XL model)
  • xlmTFXLMForSequenceClassification (XLM model)
  • xlm-robertaTFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
  • xlnetTFXLNetForSequenceClassification (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSequenceClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForSequenceClassification

class transformers.FlaxAutoModelForSequenceClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForSequenceClassification (ALBERT model)
    • BartConfig configuration class: FlaxBartForSequenceClassification (BART model)
    • BertConfig configuration class: FlaxBertForSequenceClassification (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForSequenceClassification (BigBird model)
    • DistilBertConfig configuration class: FlaxDistilBertForSequenceClassification (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraForSequenceClassification (ELECTRA model)
    • MBartConfig configuration class: FlaxMBartForSequenceClassification (mBART model)
    • RoFormerConfig configuration class: FlaxRoFormerForSequenceClassification (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForSequenceClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForSequenceClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertFlaxAlbertForSequenceClassification (ALBERT model)
  • bartFlaxBartForSequenceClassification (BART model)
  • bertFlaxBertForSequenceClassification (BERT model)
  • big_birdFlaxBigBirdForSequenceClassification (BigBird model)
  • distilbertFlaxDistilBertForSequenceClassification (DistilBERT model)
  • electraFlaxElectraForSequenceClassification (ELECTRA model)
  • mbartFlaxMBartForSequenceClassification (mBART model)
  • robertaFlaxRobertaForSequenceClassification (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
  • roformerFlaxRoFormerForSequenceClassification (RoFormer model)
  • xlm-robertaFlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForMultipleChoice

class transformers.AutoModelForMultipleChoice

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForMultipleChoice (ALBERT model)
    • BertConfig configuration class: BertForMultipleChoice (BERT model)
    • BigBirdConfig configuration class: BigBirdForMultipleChoice (BigBird model)
    • CamembertConfig configuration class: CamembertForMultipleChoice (CamemBERT model)
    • CanineConfig configuration class: CanineForMultipleChoice (CANINE model)
    • ConvBertConfig configuration class: ConvBertForMultipleChoice (ConvBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForMultipleChoice (Data2VecText model)
    • DebertaV2Config configuration class: DebertaV2ForMultipleChoice (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForMultipleChoice (DistilBERT model)
    • ElectraConfig configuration class: ElectraForMultipleChoice (ELECTRA model)
    • ErnieConfig configuration class: ErnieForMultipleChoice (ERNIE model)
    • ErnieMConfig configuration class: ErnieMForMultipleChoice (ErnieM model)
    • FNetConfig configuration class: FNetForMultipleChoice (FNet model)
    • FlaubertConfig configuration class: FlaubertForMultipleChoice (FlauBERT model)
    • FunnelConfig configuration class: FunnelForMultipleChoice (Funnel Transformer model)
    • IBertConfig configuration class: IBertForMultipleChoice (I-BERT model)
    • LongformerConfig configuration class: LongformerForMultipleChoice (Longformer model)
    • LukeConfig configuration class: LukeForMultipleChoice (LUKE model)
    • MPNetConfig configuration class: MPNetForMultipleChoice (MPNet model)
    • MegaConfig configuration class: MegaForMultipleChoice (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForMultipleChoice (Megatron-BERT model)
    • MobileBertConfig configuration class: MobileBertForMultipleChoice (MobileBERT model)
    • MraConfig configuration class: MraForMultipleChoice (MRA model)
    • NezhaConfig configuration class: NezhaForMultipleChoice (Nezha model)
    • NystromformerConfig configuration class: NystromformerForMultipleChoice (Nyströmformer model)
    • QDQBertConfig configuration class: QDQBertForMultipleChoice (QDQBert model)
    • RemBertConfig configuration class: RemBertForMultipleChoice (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForMultipleChoice (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForMultipleChoice (RoFormer model)
    • RobertaConfig configuration class: RobertaForMultipleChoice (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
    • SqueezeBertConfig configuration class: SqueezeBertForMultipleChoice (SqueezeBERT model)
    • XLMConfig configuration class: XLMForMultipleChoice (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForMultipleChoice (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetForMultipleChoice (XLNet model)
    • XmodConfig configuration class: XmodForMultipleChoice (X-MOD model)
    • YosoConfig configuration class: YosoForMultipleChoice (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForMultipleChoice

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMultipleChoice.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertForMultipleChoice (ALBERT model)
  • bertBertForMultipleChoice (BERT model)
  • big_birdBigBirdForMultipleChoice (BigBird model)
  • camembertCamembertForMultipleChoice (CamemBERT model)
  • canineCanineForMultipleChoice (CANINE model)
  • convbertConvBertForMultipleChoice (ConvBERT model)
  • data2vec-textData2VecTextForMultipleChoice (Data2VecText model)
  • deberta-v2DebertaV2ForMultipleChoice (DeBERTa-v2 model)
  • distilbertDistilBertForMultipleChoice (DistilBERT model)
  • electraElectraForMultipleChoice (ELECTRA model)
  • ernieErnieForMultipleChoice (ERNIE model)
  • ernie_mErnieMForMultipleChoice (ErnieM model)
  • flaubertFlaubertForMultipleChoice (FlauBERT model)
  • fnetFNetForMultipleChoice (FNet model)
  • funnelFunnelForMultipleChoice (Funnel Transformer model)
  • ibertIBertForMultipleChoice (I-BERT model)
  • longformerLongformerForMultipleChoice (Longformer model)
  • lukeLukeForMultipleChoice (LUKE model)
  • megaMegaForMultipleChoice (MEGA model)
  • megatron-bertMegatronBertForMultipleChoice (Megatron-BERT model)
  • mobilebertMobileBertForMultipleChoice (MobileBERT model)
  • mpnetMPNetForMultipleChoice (MPNet model)
  • mraMraForMultipleChoice (MRA model)
  • nezhaNezhaForMultipleChoice (Nezha model)
  • nystromformerNystromformerForMultipleChoice (Nyströmformer model)
  • qdqbertQDQBertForMultipleChoice (QDQBert model)
  • rembertRemBertForMultipleChoice (RemBERT model)
  • robertaRobertaForMultipleChoice (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForMultipleChoice (RoCBert model)
  • roformerRoFormerForMultipleChoice (RoFormer model)
  • squeezebertSqueezeBertForMultipleChoice (SqueezeBERT model)
  • xlmXLMForMultipleChoice (XLM model)
  • xlm-robertaXLMRobertaForMultipleChoice (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model)
  • xlnetXLNetForMultipleChoice (XLNet model)
  • xmodXmodForMultipleChoice (X-MOD model)
  • yosoYosoForMultipleChoice (YOSO model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
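The effect of evaluation mode can be seen on a bare torch.nn.Dropout module, which is exactly the kind of submodule that model.eval() deactivates:

```python
import torch

# Dropout with p=0.5: active in training mode, a no-op in evaluation mode.
layer = torch.nn.Dropout(p=0.5)

layer.eval()                       # what from_pretrained() does by default
x = torch.ones(4)
assert torch.equal(layer(x), x)    # in eval mode, dropout is the identity

layer.train()                      # re-enable dropout before fine-tuning
assert layer.training
```

Calling model.train() or model.eval() on a full model propagates this flag recursively to all submodules.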

Examples:

>>> from transformers import AutoConfig, AutoModelForMultipleChoice

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMultipleChoice.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
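The state_dict parameter described above can be exercised without a download by first saving a small model locally. A sketch (the tiny config sizes and the ./tiny_model path are illustrative):

```python
from transformers import AutoModelForMultipleChoice, BertConfig, BertForMultipleChoice

# Create and save a tiny randomly initialized model locally
# (sizes and path are illustrative).
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
)
model = BertForMultipleChoice(config)
model.save_pretrained("./tiny_model")

# Reload the architecture from the saved directory, but supply the weights
# explicitly via `state_dict` instead of reading them from the saved file.
reloaded = AutoModelForMultipleChoice.from_pretrained(
    "./tiny_model", state_dict=model.state_dict()
)
```

As the parameter description notes, for a plain save/reload round trip like this one, save_pretrained() and from_pretrained() alone are the simpler option; state_dict is for supplying weights from elsewhere.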

TFAutoModelForMultipleChoice

class transformers.TFAutoModelForMultipleChoice

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForMultipleChoice (ALBERT model)
    • BertConfig configuration class: TFBertForMultipleChoice (BERT model)
    • CamembertConfig configuration class: TFCamembertForMultipleChoice (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertForMultipleChoice (ConvBERT model)
    • DebertaV2Config configuration class: TFDebertaV2ForMultipleChoice (DeBERTa-v2 model)
    • DistilBertConfig configuration class: TFDistilBertForMultipleChoice (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForMultipleChoice (ELECTRA model)
    • FlaubertConfig configuration class: TFFlaubertForMultipleChoice (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForMultipleChoice (Funnel Transformer model)
    • LongformerConfig configuration class: TFLongformerForMultipleChoice (Longformer model)
    • MPNetConfig configuration class: TFMPNetForMultipleChoice (MPNet model)
    • MobileBertConfig configuration class: TFMobileBertForMultipleChoice (MobileBERT model)
    • RemBertConfig configuration class: TFRemBertForMultipleChoice (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForMultipleChoice (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForMultipleChoice (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
    • XLMConfig configuration class: TFXLMForMultipleChoice (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetForMultipleChoice (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForMultipleChoice.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertTFAlbertForMultipleChoice (ALBERT model)
  • bertTFBertForMultipleChoice (BERT model)
  • camembertTFCamembertForMultipleChoice (CamemBERT model)
  • convbertTFConvBertForMultipleChoice (ConvBERT model)
  • deberta-v2TFDebertaV2ForMultipleChoice (DeBERTa-v2 model)
  • distilbertTFDistilBertForMultipleChoice (DistilBERT model)
  • electraTFElectraForMultipleChoice (ELECTRA model)
  • flaubertTFFlaubertForMultipleChoice (FlauBERT model)
  • funnelTFFunnelForMultipleChoice (Funnel Transformer model)
  • longformerTFLongformerForMultipleChoice (Longformer model)
  • mobilebertTFMobileBertForMultipleChoice (MobileBERT model)
  • mpnetTFMPNetForMultipleChoice (MPNet model)
  • rembertTFRemBertForMultipleChoice (RemBERT model)
  • robertaTFRobertaForMultipleChoice (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
  • roformerTFRoFormerForMultipleChoice (RoFormer model)
  • xlmTFXLMForMultipleChoice (XLM model)
  • xlm-robertaTFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
  • xlnetTFXLNetForMultipleChoice (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMultipleChoice.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForMultipleChoice

class transformers.FlaxAutoModelForMultipleChoice

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForMultipleChoice (ALBERT model)
    • BertConfig configuration class: FlaxBertForMultipleChoice (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForMultipleChoice (BigBird model)
    • DistilBertConfig configuration class: FlaxDistilBertForMultipleChoice (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraForMultipleChoice (ELECTRA model)
    • RoFormerConfig configuration class: FlaxRoFormerForMultipleChoice (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForMultipleChoice (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForMultipleChoice.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — FlaxAlbertForMultipleChoice (ALBERT model)
  • bert — FlaxBertForMultipleChoice (BERT model)
  • big_bird — FlaxBigBirdForMultipleChoice (BigBird model)
  • distilbert — FlaxDistilBertForMultipleChoice (DistilBERT model)
  • electra — FlaxElectraForMultipleChoice (ELECTRA model)
  • roberta — FlaxRobertaForMultipleChoice (RoBERTa model)
  • roberta-prelayernorm — FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
  • roformer — FlaxRoFormerForMultipleChoice (RoFormer model)
  • xlm-roberta — FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
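To illustrate how a multiple-choice head’s output is typically consumed: the model emits one logit per candidate answer, and the prediction is the softmax-argmax over candidates. The sketch below uses made-up logits and plain Python rather than the library API:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# One logit per candidate ending; these values are made up for illustration.
choice_logits = [1.2, 3.4, 0.5, -0.7]
probs = softmax(choice_logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(best)  # index of the highest-scoring choice
```

In the real model, the logits come out of `model(**inputs).logits` with one row per example and one column per candidate; the decision rule is the same argmax.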

AutoModelForNextSentencePrediction

class transformers.AutoModelForNextSentencePrediction

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BertConfig configuration class: BertForNextSentencePrediction (BERT model)
    • ErnieConfig configuration class: ErnieForNextSentencePrediction (ERNIE model)
    • FNetConfig configuration class: FNetForNextSentencePrediction (FNet model)
    • MegatronBertConfig configuration class: MegatronBertForNextSentencePrediction (Megatron-BERT model)
    • MobileBertConfig configuration class: MobileBertForNextSentencePrediction (MobileBERT model)
    • NezhaConfig configuration class: NezhaForNextSentencePrediction (Nezha model)
    • QDQBertConfig configuration class: QDQBertForNextSentencePrediction (QDQBert model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
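The default selection logic described for attn_implementation can be sketched as follows (a simplified, hypothetical illustration of the documented fallback order, not the library’s internal code):

```python
def pick_attn_implementation(requested=None, torch_version=(2, 2, 0)):
    """Honor an explicit request; otherwise prefer SDPA on torch >= 2.1.1,
    falling back to the manual "eager" implementation.
    flash_attention_2 is only used when requested explicitly."""
    if requested is not None:
        assert requested in ("eager", "sdpa", "flash_attention_2")
        return requested
    if torch_version >= (2, 1, 1):
        return "sdpa"
    return "eager"

print(pick_attn_implementation())                         # "sdpa"
print(pick_attn_implementation(torch_version=(2, 0, 1)))  # "eager"
```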

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForNextSentencePrediction.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should first check whether using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bert — BertForNextSentencePrediction (BERT model)
  • ernie — ErnieForNextSentencePrediction (ERNIE model)
  • fnet — FNetForNextSentencePrediction (FNet model)
  • megatron-bert — MegatronBertForNextSentencePrediction (Megatron-BERT model)
  • mobilebert — MobileBertForNextSentencePrediction (MobileBERT model)
  • nezha — NezhaForNextSentencePrediction (Nezha model)
  • qdqbert — QDQBertForNextSentencePrediction (QDQBert model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForNextSentencePrediction.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
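The next-sentence-prediction head returns two logits per sentence pair: index 0 scores “sentence B is a continuation of sentence A”, index 1 scores “sentence B is random” (the label convention of BertForNextSentencePrediction). A minimal sketch of interpreting such logits, with made-up values and plain Python rather than the library API:

```python
import math

def nsp_decision(logits):
    """Given [is_next_logit, random_logit], return (is_next, prob_is_next)."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    p_is_next = exps[0] / sum(exps)
    return p_is_next >= 0.5, p_is_next

# Made-up logits: a strongly positive "is next" score.
is_next, p = nsp_decision([4.1, -2.3])
print(is_next)
```

With a real model, `logits` would come from `model(**encoding).logits[0]` for an encoding of the two sentences.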

TFAutoModelForNextSentencePrediction

class transformers.TFAutoModelForNextSentencePrediction

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BertConfig configuration class: TFBertForNextSentencePrediction (BERT model)
    • MobileBertConfig configuration class: TFMobileBertForNextSentencePrediction (MobileBERT model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForNextSentencePrediction.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bert — TFBertForNextSentencePrediction (BERT model)
  • mobilebert — TFMobileBertForNextSentencePrediction (MobileBERT model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForNextSentencePrediction

class transformers.FlaxAutoModelForNextSentencePrediction

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BertConfig configuration class: FlaxBertForNextSentencePrediction (BERT model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForNextSentencePrediction.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • bert — FlaxBertForNextSentencePrediction (BERT model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForTokenClassification

class transformers.AutoModelForTokenClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForTokenClassification (ALBERT model)
    • BertConfig configuration class: BertForTokenClassification (BERT model)
    • BigBirdConfig configuration class: BigBirdForTokenClassification (BigBird model)
    • BioGptConfig configuration class: BioGptForTokenClassification (BioGpt model)
    • BloomConfig configuration class: BloomForTokenClassification (BLOOM model)
    • BrosConfig configuration class: BrosForTokenClassification (BROS model)
    • CamembertConfig configuration class: CamembertForTokenClassification (CamemBERT model)
    • CanineConfig configuration class: CanineForTokenClassification (CANINE model)
    • ConvBertConfig configuration class: ConvBertForTokenClassification (ConvBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForTokenClassification (Data2VecText model)
    • DebertaConfig configuration class: DebertaForTokenClassification (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2ForTokenClassification (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForTokenClassification (DistilBERT model)
    • ElectraConfig configuration class: ElectraForTokenClassification (ELECTRA model)
    • ErnieConfig configuration class: ErnieForTokenClassification (ERNIE model)
    • ErnieMConfig configuration class: ErnieMForTokenClassification (ErnieM model)
    • EsmConfig configuration class: EsmForTokenClassification (ESM model)
    • FNetConfig configuration class: FNetForTokenClassification (FNet model)
    • FalconConfig configuration class: FalconForTokenClassification (Falcon model)
    • FlaubertConfig configuration class: FlaubertForTokenClassification (FlauBERT model)
    • FunnelConfig configuration class: FunnelForTokenClassification (Funnel Transformer model)
    • GPT2Config configuration class: GPT2ForTokenClassification (OpenAI GPT-2 model)
    • GPTBigCodeConfig configuration class: GPTBigCodeForTokenClassification (GPTBigCode model)
    • GPTNeoConfig configuration class: GPTNeoForTokenClassification (GPT Neo model)
    • GPTNeoXConfig configuration class: GPTNeoXForTokenClassification (GPT NeoX model)
    • Gemma2Config configuration class: Gemma2ForTokenClassification (Gemma2 model)
    • GemmaConfig configuration class: GemmaForTokenClassification (Gemma model)
    • IBertConfig configuration class: IBertForTokenClassification (I-BERT model)
    • LayoutLMConfig configuration class: LayoutLMForTokenClassification (LayoutLM model)
    • LayoutLMv2Config configuration class: LayoutLMv2ForTokenClassification (LayoutLMv2 model)
    • LayoutLMv3Config configuration class: LayoutLMv3ForTokenClassification (LayoutLMv3 model)
    • LiltConfig configuration class: LiltForTokenClassification (LiLT model)
    • LlamaConfig configuration class: LlamaForTokenClassification (LLaMA model)
    • LongformerConfig configuration class: LongformerForTokenClassification (Longformer model)
    • LukeConfig configuration class: LukeForTokenClassification (LUKE model)
    • MPNetConfig configuration class: MPNetForTokenClassification (MPNet model)
    • MT5Config configuration class: MT5ForTokenClassification (MT5 model)
    • MarkupLMConfig configuration class: MarkupLMForTokenClassification (MarkupLM model)
    • MegaConfig configuration class: MegaForTokenClassification (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForTokenClassification (Megatron-BERT model)
    • MistralConfig configuration class: MistralForTokenClassification (Mistral model)
    • MixtralConfig configuration class: MixtralForTokenClassification (Mixtral model)
    • MobileBertConfig configuration class: MobileBertForTokenClassification (MobileBERT model)
    • MptConfig configuration class: MptForTokenClassification (MPT model)
    • MraConfig configuration class: MraForTokenClassification (MRA model)
    • NemotronConfig configuration class: NemotronForTokenClassification (Nemotron model)
    • NezhaConfig configuration class: NezhaForTokenClassification (Nezha model)
    • NystromformerConfig configuration class: NystromformerForTokenClassification (Nyströmformer model)
    • PersimmonConfig configuration class: PersimmonForTokenClassification (Persimmon model)
    • Phi3Config configuration class: Phi3ForTokenClassification (Phi3 model)
    • PhiConfig configuration class: PhiForTokenClassification (Phi model)
    • QDQBertConfig configuration class: QDQBertForTokenClassification (QDQBert model)
    • Qwen2Config configuration class: Qwen2ForTokenClassification (Qwen2 model)
    • Qwen2MoeConfig configuration class: Qwen2MoeForTokenClassification (Qwen2MoE model)
    • RemBertConfig configuration class: RemBertForTokenClassification (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForTokenClassification (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForTokenClassification (RoFormer model)
    • RobertaConfig configuration class: RobertaForTokenClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
    • SqueezeBertConfig configuration class: SqueezeBertForTokenClassification (SqueezeBERT model)
    • StableLmConfig configuration class: StableLmForTokenClassification (StableLm model)
    • Starcoder2Config configuration class: Starcoder2ForTokenClassification (Starcoder2 model)
    • T5Config configuration class: T5ForTokenClassification (T5 model)
    • UMT5Config configuration class: UMT5ForTokenClassification (UMT5 model)
    • XLMConfig configuration class: XLMForTokenClassification (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForTokenClassification (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetForTokenClassification (XLNet model)
    • XmodConfig configuration class: XmodForTokenClassification (X-MOD model)
    • YosoConfig configuration class: YosoForTokenClassification (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a token classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForTokenClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertAlbertForTokenClassification (ALBERT model)
  • bertBertForTokenClassification (BERT model)
  • big_birdBigBirdForTokenClassification (BigBird model)
  • biogptBioGptForTokenClassification (BioGpt model)
  • bloomBloomForTokenClassification (BLOOM model)
  • brosBrosForTokenClassification (BROS model)
  • camembertCamembertForTokenClassification (CamemBERT model)
  • canineCanineForTokenClassification (CANINE model)
  • convbertConvBertForTokenClassification (ConvBERT model)
  • data2vec-textData2VecTextForTokenClassification (Data2VecText model)
  • debertaDebertaForTokenClassification (DeBERTa model)
  • deberta-v2DebertaV2ForTokenClassification (DeBERTa-v2 model)
  • distilbertDistilBertForTokenClassification (DistilBERT model)
  • electraElectraForTokenClassification (ELECTRA model)
  • ernieErnieForTokenClassification (ERNIE model)
  • ernie_mErnieMForTokenClassification (ErnieM model)
  • esmEsmForTokenClassification (ESM model)
  • falconFalconForTokenClassification (Falcon model)
  • flaubertFlaubertForTokenClassification (FlauBERT model)
  • fnetFNetForTokenClassification (FNet model)
  • funnelFunnelForTokenClassification (Funnel Transformer model)
  • gemmaGemmaForTokenClassification (Gemma model)
  • gemma2Gemma2ForTokenClassification (Gemma2 model)
  • gpt-sw3GPT2ForTokenClassification (GPT-Sw3 model)
  • gpt2GPT2ForTokenClassification (OpenAI GPT-2 model)
  • gpt_bigcodeGPTBigCodeForTokenClassification (GPTBigCode model)
  • gpt_neoGPTNeoForTokenClassification (GPT Neo model)
  • gpt_neoxGPTNeoXForTokenClassification (GPT NeoX model)
  • ibertIBertForTokenClassification (I-BERT model)
  • layoutlmLayoutLMForTokenClassification (LayoutLM model)
  • layoutlmv2LayoutLMv2ForTokenClassification (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3ForTokenClassification (LayoutLMv3 model)
  • liltLiltForTokenClassification (LiLT model)
  • llamaLlamaForTokenClassification (LLaMA model)
  • longformerLongformerForTokenClassification (Longformer model)
  • lukeLukeForTokenClassification (LUKE model)
  • markuplmMarkupLMForTokenClassification (MarkupLM model)
  • megaMegaForTokenClassification (MEGA model)
  • megatron-bertMegatronBertForTokenClassification (Megatron-BERT model)
  • mistralMistralForTokenClassification (Mistral model)
  • mixtralMixtralForTokenClassification (Mixtral model)
  • mobilebertMobileBertForTokenClassification (MobileBERT model)
  • mpnetMPNetForTokenClassification (MPNet model)
  • mptMptForTokenClassification (MPT model)
  • mraMraForTokenClassification (MRA model)
  • mt5MT5ForTokenClassification (MT5 model)
  • nemotronNemotronForTokenClassification (Nemotron model)
  • nezhaNezhaForTokenClassification (Nezha model)
  • nystromformerNystromformerForTokenClassification (Nyströmformer model)
  • persimmonPersimmonForTokenClassification (Persimmon model)
  • phiPhiForTokenClassification (Phi model)
  • phi3Phi3ForTokenClassification (Phi3 model)
  • qdqbertQDQBertForTokenClassification (QDQBert model)
  • qwen2Qwen2ForTokenClassification (Qwen2 model)
  • qwen2_moeQwen2MoeForTokenClassification (Qwen2MoE model)
  • rembertRemBertForTokenClassification (RemBERT model)
  • robertaRobertaForTokenClassification (RoBERTa model)
  • roberta-prelayernormRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
  • roc_bertRoCBertForTokenClassification (RoCBert model)
  • roformerRoFormerForTokenClassification (RoFormer model)
  • squeezebertSqueezeBertForTokenClassification (SqueezeBERT model)
  • stablelmStableLmForTokenClassification (StableLm model)
  • starcoder2Starcoder2ForTokenClassification (Starcoder2 model)
  • t5T5ForTokenClassification (T5 model)
  • umt5UMT5ForTokenClassification (UMT5 model)
  • xlmXLMForTokenClassification (XLM model)
  • xlm-robertaXLMRobertaForTokenClassification (XLM-RoBERTa model)
  • xlm-roberta-xlXLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model)
  • xlnetXLNetForTokenClassification (XLNet model)
  • xmodXmodForTokenClassification (X-MOD model)
  • yosoYosoForTokenClassification (YOSO model)
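
The selection rule above amounts to a dictionary lookup keyed on the config's model_type. The following is a hypothetical sketch (the real implementation uses a lazy mapping internally; the names and helper here are illustrative, not part of the transformers API):

```python
# Hypothetical sketch of the model_type -> class-name mapping the auto
# classes consult. The real mapping is lazy and far larger; these two
# entries and the resolve() helper are illustrative only.
MODEL_MAPPING = {
    "bert": "BertForTokenClassification",
    "roberta": "RobertaForTokenClassification",
}

def resolve(model_type):
    try:
        return MODEL_MAPPING[model_type]
    except KeyError:
        # Mirrors the error raised when a config's model_type is unknown.
        raise ValueError(f"Unrecognized model type: {model_type}")

print(resolve("bert"))  # BertForTokenClassification
```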

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back to training mode with model.train().
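
The eval/train toggle follows the standard torch.nn.Module API. A minimal illustration, using a hypothetical stand-in class so it runs without torch installed (real models inherit the same eval()/train() methods and .training attribute from torch.nn.Module):

```python
# ModuleLike is a hypothetical stand-in for torch.nn.Module, showing the
# training-mode flag that from_pretrained manages for you.
class ModuleLike:
    def __init__(self):
        self.training = True

    def eval(self):
        self.training = False
        return self

    def train(self, mode=True):
        self.training = mode
        return self

model = ModuleLike().eval()  # from_pretrained returns the model like this
print(model.training)        # False: dropout modules would be deactivated
model.train()                # switch back before fine-tuning
print(model.training)        # True
```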

Examples:

>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForTokenClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
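
The kwargs routing described under the kwargs parameter (when no config is passed) amounts to partitioning the keyword arguments by whether they name a configuration attribute. A hedged sketch; split_kwargs is a hypothetical helper, not part of the transformers API:

```python
# Sketch of how from_pretrained partitions **kwargs when no config is
# supplied: keys matching configuration attributes update the config,
# and the remaining keys are forwarded to the model's __init__.
def split_kwargs(config_attrs, kwargs):
    config_updates = {k: v for k, v in kwargs.items() if k in config_attrs}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attrs}
    return config_updates, model_kwargs

config_updates, model_kwargs = split_kwargs(
    {"output_attentions", "num_labels"},    # attributes the config defines
    {"output_attentions": True, "foo": 1},  # kwargs given to from_pretrained
)
print(config_updates)  # {'output_attentions': True}
print(model_kwargs)    # {'foo': 1}
```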

TFAutoModelForTokenClassification

class transformers.TFAutoModelForTokenClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForTokenClassification (ALBERT model)
    • BertConfig configuration class: TFBertForTokenClassification (BERT model)
    • CamembertConfig configuration class: TFCamembertForTokenClassification (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertForTokenClassification (ConvBERT model)
    • DebertaConfig configuration class: TFDebertaForTokenClassification (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2ForTokenClassification (DeBERTa-v2 model)
    • DistilBertConfig configuration class: TFDistilBertForTokenClassification (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForTokenClassification (ELECTRA model)
    • EsmConfig configuration class: TFEsmForTokenClassification (ESM model)
    • FlaubertConfig configuration class: TFFlaubertForTokenClassification (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForTokenClassification (Funnel Transformer model)
    • LayoutLMConfig configuration class: TFLayoutLMForTokenClassification (LayoutLM model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3ForTokenClassification (LayoutLMv3 model)
    • LongformerConfig configuration class: TFLongformerForTokenClassification (Longformer model)
    • MPNetConfig configuration class: TFMPNetForTokenClassification (MPNet model)
    • MobileBertConfig configuration class: TFMobileBertForTokenClassification (MobileBERT model)
    • RemBertConfig configuration class: TFRemBertForTokenClassification (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForTokenClassification (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForTokenClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
    • XLMConfig configuration class: TFXLMForTokenClassification (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForTokenClassification (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetForTokenClassification (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a token classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForTokenClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForTokenClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertTFAlbertForTokenClassification (ALBERT model)
  • bertTFBertForTokenClassification (BERT model)
  • camembertTFCamembertForTokenClassification (CamemBERT model)
  • convbertTFConvBertForTokenClassification (ConvBERT model)
  • debertaTFDebertaForTokenClassification (DeBERTa model)
  • deberta-v2TFDebertaV2ForTokenClassification (DeBERTa-v2 model)
  • distilbertTFDistilBertForTokenClassification (DistilBERT model)
  • electraTFElectraForTokenClassification (ELECTRA model)
  • esmTFEsmForTokenClassification (ESM model)
  • flaubertTFFlaubertForTokenClassification (FlauBERT model)
  • funnelTFFunnelForTokenClassification (Funnel Transformer model)
  • layoutlmTFLayoutLMForTokenClassification (LayoutLM model)
  • layoutlmv3TFLayoutLMv3ForTokenClassification (LayoutLMv3 model)
  • longformerTFLongformerForTokenClassification (Longformer model)
  • mobilebertTFMobileBertForTokenClassification (MobileBERT model)
  • mpnetTFMPNetForTokenClassification (MPNet model)
  • rembertTFRemBertForTokenClassification (RemBERT model)
  • robertaTFRobertaForTokenClassification (RoBERTa model)
  • roberta-prelayernormTFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
  • roformerTFRoFormerForTokenClassification (RoFormer model)
  • xlmTFXLMForTokenClassification (XLM model)
  • xlm-robertaTFXLMRobertaForTokenClassification (XLM-RoBERTa model)
  • xlnetTFXLNetForTokenClassification (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForTokenClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForTokenClassification

class transformers.FlaxAutoModelForTokenClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForTokenClassification (ALBERT model)
    • BertConfig configuration class: FlaxBertForTokenClassification (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForTokenClassification (BigBird model)
    • DistilBertConfig configuration class: FlaxDistilBertForTokenClassification (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraForTokenClassification (ELECTRA model)
    • RoFormerConfig configuration class: FlaxRoFormerForTokenClassification (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForTokenClassification (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForTokenClassification (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a token classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForTokenClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertFlaxAlbertForTokenClassification (ALBERT model)
  • bertFlaxBertForTokenClassification (BERT model)
  • big_birdFlaxBigBirdForTokenClassification (BigBird model)
  • distilbertFlaxDistilBertForTokenClassification (DistilBERT model)
  • electraFlaxElectraForTokenClassification (ELECTRA model)
  • robertaFlaxRobertaForTokenClassification (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
  • roformerFlaxRoFormerForTokenClassification (RoFormer model)
  • xlm-robertaFlaxXLMRobertaForTokenClassification (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForTokenClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForQuestionAnswering

class transformers.AutoModelForQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: AlbertForQuestionAnswering (ALBERT model)
    • BartConfig configuration class: BartForQuestionAnswering (BART model)
    • BertConfig configuration class: BertForQuestionAnswering (BERT model)
    • BigBirdConfig configuration class: BigBirdForQuestionAnswering (BigBird model)
    • BigBirdPegasusConfig configuration class: BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model)
    • BloomConfig configuration class: BloomForQuestionAnswering (BLOOM model)
    • CamembertConfig configuration class: CamembertForQuestionAnswering (CamemBERT model)
    • CanineConfig configuration class: CanineForQuestionAnswering (CANINE model)
    • ConvBertConfig configuration class: ConvBertForQuestionAnswering (ConvBERT model)
    • Data2VecTextConfig configuration class: Data2VecTextForQuestionAnswering (Data2VecText model)
    • DebertaConfig configuration class: DebertaForQuestionAnswering (DeBERTa model)
    • DebertaV2Config configuration class: DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
    • DistilBertConfig configuration class: DistilBertForQuestionAnswering (DistilBERT model)
    • ElectraConfig configuration class: ElectraForQuestionAnswering (ELECTRA model)
    • ErnieConfig configuration class: ErnieForQuestionAnswering (ERNIE model)
    • ErnieMConfig configuration class: ErnieMForQuestionAnswering (ErnieM model)
    • FNetConfig configuration class: FNetForQuestionAnswering (FNet model)
    • FalconConfig configuration class: FalconForQuestionAnswering (Falcon model)
    • FlaubertConfig configuration class: FlaubertForQuestionAnsweringSimple (FlauBERT model)
    • FunnelConfig configuration class: FunnelForQuestionAnswering (Funnel Transformer model)
    • GPT2Config configuration class: GPT2ForQuestionAnswering (OpenAI GPT-2 model)
    • GPTJConfig configuration class: GPTJForQuestionAnswering (GPT-J model)
    • GPTNeoConfig configuration class: GPTNeoForQuestionAnswering (GPT Neo model)
    • GPTNeoXConfig configuration class: GPTNeoXForQuestionAnswering (GPT NeoX model)
    • IBertConfig configuration class: IBertForQuestionAnswering (I-BERT model)
    • LEDConfig configuration class: LEDForQuestionAnswering (LED model)
    • LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
    • LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
    • LiltConfig configuration class: LiltForQuestionAnswering (LiLT model)
    • LlamaConfig configuration class: LlamaForQuestionAnswering (LLaMA model)
    • LongformerConfig configuration class: LongformerForQuestionAnswering (Longformer model)
    • LukeConfig configuration class: LukeForQuestionAnswering (LUKE model)
    • LxmertConfig configuration class: LxmertForQuestionAnswering (LXMERT model)
    • MBartConfig configuration class: MBartForQuestionAnswering (mBART model)
    • MPNetConfig configuration class: MPNetForQuestionAnswering (MPNet model)
    • MT5Config configuration class: MT5ForQuestionAnswering (MT5 model)
    • MarkupLMConfig configuration class: MarkupLMForQuestionAnswering (MarkupLM model)
    • MegaConfig configuration class: MegaForQuestionAnswering (MEGA model)
    • MegatronBertConfig configuration class: MegatronBertForQuestionAnswering (Megatron-BERT model)
    • MobileBertConfig configuration class: MobileBertForQuestionAnswering (MobileBERT model)
    • MptConfig configuration class: MptForQuestionAnswering (MPT model)
    • MraConfig configuration class: MraForQuestionAnswering (MRA model)
    • MvpConfig configuration class: MvpForQuestionAnswering (MVP model)
    • NemotronConfig configuration class: NemotronForQuestionAnswering (Nemotron model)
    • NezhaConfig configuration class: NezhaForQuestionAnswering (Nezha model)
    • NystromformerConfig configuration class: NystromformerForQuestionAnswering (Nyströmformer model)
    • OPTConfig configuration class: OPTForQuestionAnswering (OPT model)
    • QDQBertConfig configuration class: QDQBertForQuestionAnswering (QDQBert model)
    • ReformerConfig configuration class: ReformerForQuestionAnswering (Reformer model)
    • RemBertConfig configuration class: RemBertForQuestionAnswering (RemBERT model)
    • RoCBertConfig configuration class: RoCBertForQuestionAnswering (RoCBert model)
    • RoFormerConfig configuration class: RoFormerForQuestionAnswering (RoFormer model)
    • RobertaConfig configuration class: RobertaForQuestionAnswering (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
    • SplinterConfig configuration class: SplinterForQuestionAnswering (Splinter model)
    • SqueezeBertConfig configuration class: SqueezeBertForQuestionAnswering (SqueezeBERT model)
    • T5Config configuration class: T5ForQuestionAnswering (T5 model)
    • UMT5Config configuration class: UMT5ForQuestionAnswering (UMT5 model)
    • XLMConfig configuration class: XLMForQuestionAnsweringSimple (XLM model)
    • XLMRobertaConfig configuration class: XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
    • XLMRobertaXLConfig configuration class: XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model)
    • XLNetConfig configuration class: XLNetForQuestionAnsweringSimple (XLNet model)
    • XmodConfig configuration class: XmodForQuestionAnswering (X-MOD model)
    • YosoConfig configuration class: YosoForQuestionAnswering (YOSO model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
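The kwargs-splitting behavior described above can be sketched in a few lines. This is a simplified illustration of the assumed behavior, not the actual transformers source; DummyConfig and split_kwargs are hypothetical stand-ins:

```python
# Simplified sketch (assumed behavior, not the actual transformers code) of how
# from_pretrained handles kwargs when no config is passed: keys that match
# configuration attributes override the config, and the remaining keys are
# forwarded to the underlying model's __init__.
class DummyConfig:
    """Hypothetical stand-in for a PretrainedConfig with two attributes."""
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 768

def split_kwargs(config, kwargs):
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)  # configuration attribute: override it
        else:
            model_kwargs[key] = value    # unknown key: pass to model __init__
    return config, model_kwargs

config, model_kwargs = split_kwargs(
    DummyConfig(), {"output_attentions": True, "custom_arg": 1}
)
print(config.output_attentions)  # True
print(model_kwargs)              # {'custom_arg': 1}
```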

Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — AlbertForQuestionAnswering (ALBERT model)
  • bart — BartForQuestionAnswering (BART model)
  • bert — BertForQuestionAnswering (BERT model)
  • big_bird — BigBirdForQuestionAnswering (BigBird model)
  • bigbird_pegasus — BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model)
  • bloom — BloomForQuestionAnswering (BLOOM model)
  • camembert — CamembertForQuestionAnswering (CamemBERT model)
  • canine — CanineForQuestionAnswering (CANINE model)
  • convbert — ConvBertForQuestionAnswering (ConvBERT model)
  • data2vec-text — Data2VecTextForQuestionAnswering (Data2VecText model)
  • deberta — DebertaForQuestionAnswering (DeBERTa model)
  • deberta-v2 — DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
  • distilbert — DistilBertForQuestionAnswering (DistilBERT model)
  • electra — ElectraForQuestionAnswering (ELECTRA model)
  • ernie — ErnieForQuestionAnswering (ERNIE model)
  • ernie_m — ErnieMForQuestionAnswering (ErnieM model)
  • falcon — FalconForQuestionAnswering (Falcon model)
  • flaubert — FlaubertForQuestionAnsweringSimple (FlauBERT model)
  • fnet — FNetForQuestionAnswering (FNet model)
  • funnel — FunnelForQuestionAnswering (Funnel Transformer model)
  • gpt2 — GPT2ForQuestionAnswering (OpenAI GPT-2 model)
  • gpt_neo — GPTNeoForQuestionAnswering (GPT Neo model)
  • gpt_neox — GPTNeoXForQuestionAnswering (GPT NeoX model)
  • gptj — GPTJForQuestionAnswering (GPT-J model)
  • ibert — IBertForQuestionAnswering (I-BERT model)
  • layoutlmv2 — LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
  • layoutlmv3 — LayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
  • led — LEDForQuestionAnswering (LED model)
  • lilt — LiltForQuestionAnswering (LiLT model)
  • llama — LlamaForQuestionAnswering (LLaMA model)
  • longformer — LongformerForQuestionAnswering (Longformer model)
  • luke — LukeForQuestionAnswering (LUKE model)
  • lxmert — LxmertForQuestionAnswering (LXMERT model)
  • markuplm — MarkupLMForQuestionAnswering (MarkupLM model)
  • mbart — MBartForQuestionAnswering (mBART model)
  • mega — MegaForQuestionAnswering (MEGA model)
  • megatron-bert — MegatronBertForQuestionAnswering (Megatron-BERT model)
  • mobilebert — MobileBertForQuestionAnswering (MobileBERT model)
  • mpnet — MPNetForQuestionAnswering (MPNet model)
  • mpt — MptForQuestionAnswering (MPT model)
  • mra — MraForQuestionAnswering (MRA model)
  • mt5 — MT5ForQuestionAnswering (MT5 model)
  • mvp — MvpForQuestionAnswering (MVP model)
  • nemotron — NemotronForQuestionAnswering (Nemotron model)
  • nezha — NezhaForQuestionAnswering (Nezha model)
  • nystromformer — NystromformerForQuestionAnswering (Nyströmformer model)
  • opt — OPTForQuestionAnswering (OPT model)
  • qdqbert — QDQBertForQuestionAnswering (QDQBert model)
  • reformer — ReformerForQuestionAnswering (Reformer model)
  • rembert — RemBertForQuestionAnswering (RemBERT model)
  • roberta — RobertaForQuestionAnswering (RoBERTa model)
  • roberta-prelayernorm — RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
  • roc_bert — RoCBertForQuestionAnswering (RoCBert model)
  • roformer — RoFormerForQuestionAnswering (RoFormer model)
  • splinter — SplinterForQuestionAnswering (Splinter model)
  • squeezebert — SqueezeBertForQuestionAnswering (SqueezeBERT model)
  • t5 — T5ForQuestionAnswering (T5 model)
  • umt5 — UMT5ForQuestionAnswering (UMT5 model)
  • xlm — XLMForQuestionAnsweringSimple (XLM model)
  • xlm-roberta — XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
  • xlm-roberta-xl — XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model)
  • xlnet — XLNetForQuestionAnsweringSimple (XLNet model)
  • xmod — XmodForQuestionAnswering (X-MOD model)
  • yoso — YosoForQuestionAnswering (YOSO model)
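The two-step class resolution described above (config model_type first, then pattern matching on the checkpoint name) can be sketched as follows. This is an illustrative simplification, not the transformers implementation; QA_MODEL_MAPPING here is a small hypothetical subset of the real mapping:

```python
# Simplified sketch (illustrative only, not the transformers implementation) of
# auto-class resolution: look up config.model_type first, then fall back to
# pattern matching on the checkpoint name.
QA_MODEL_MAPPING = {
    "bert": "BertForQuestionAnswering",
    "distilbert": "DistilBertForQuestionAnswering",
    "roberta": "RobertaForQuestionAnswering",
}

def resolve_model_class(name_or_path, model_type=None):
    if model_type in QA_MODEL_MAPPING:
        return QA_MODEL_MAPPING[model_type]
    # Fallback: try the longest keys first so "distilbert" is not shadowed
    # by "bert" for a name like "distilbert-base-uncased".
    for key in sorted(QA_MODEL_MAPPING, key=len, reverse=True):
        if key in name_or_path:
            return QA_MODEL_MAPPING[key]
    raise ValueError(f"Could not infer a model class for {name_or_path!r}")

print(resolve_model_class("google-bert/bert-base-cased"))  # BertForQuestionAnswering
print(resolve_model_class("distilbert-base-uncased"))      # DistilBertForQuestionAnswering
```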

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForQuestionAnswering.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForQuestionAnswering

class transformers.TFAutoModelForQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertForQuestionAnswering (ALBERT model)
    • BertConfig configuration class: TFBertForQuestionAnswering (BERT model)
    • CamembertConfig configuration class: TFCamembertForQuestionAnswering (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertForQuestionAnswering (ConvBERT model)
    • DebertaConfig configuration class: TFDebertaForQuestionAnswering (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model)
    • DistilBertConfig configuration class: TFDistilBertForQuestionAnswering (DistilBERT model)
    • ElectraConfig configuration class: TFElectraForQuestionAnswering (ELECTRA model)
    • FlaubertConfig configuration class: TFFlaubertForQuestionAnsweringSimple (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelForQuestionAnswering (Funnel Transformer model)
    • GPTJConfig configuration class: TFGPTJForQuestionAnswering (GPT-J model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
    • LongformerConfig configuration class: TFLongformerForQuestionAnswering (Longformer model)
    • MPNetConfig configuration class: TFMPNetForQuestionAnswering (MPNet model)
    • MobileBertConfig configuration class: TFMobileBertForQuestionAnswering (MobileBERT model)
    • RemBertConfig configuration class: TFRemBertForQuestionAnswering (RemBERT model)
    • RoFormerConfig configuration class: TFRoFormerForQuestionAnswering (RoFormer model)
    • RobertaConfig configuration class: TFRobertaForQuestionAnswering (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
    • XLMConfig configuration class: TFXLMForQuestionAnsweringSimple (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetForQuestionAnsweringSimple (XLNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — TFAlbertForQuestionAnswering (ALBERT model)
  • bert — TFBertForQuestionAnswering (BERT model)
  • camembert — TFCamembertForQuestionAnswering (CamemBERT model)
  • convbert — TFConvBertForQuestionAnswering (ConvBERT model)
  • deberta — TFDebertaForQuestionAnswering (DeBERTa model)
  • deberta-v2 — TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model)
  • distilbert — TFDistilBertForQuestionAnswering (DistilBERT model)
  • electra — TFElectraForQuestionAnswering (ELECTRA model)
  • flaubert — TFFlaubertForQuestionAnsweringSimple (FlauBERT model)
  • funnel — TFFunnelForQuestionAnswering (Funnel Transformer model)
  • gptj — TFGPTJForQuestionAnswering (GPT-J model)
  • layoutlmv3 — TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
  • longformer — TFLongformerForQuestionAnswering (Longformer model)
  • mobilebert — TFMobileBertForQuestionAnswering (MobileBERT model)
  • mpnet — TFMPNetForQuestionAnswering (MPNet model)
  • rembert — TFRemBertForQuestionAnswering (RemBERT model)
  • roberta — TFRobertaForQuestionAnswering (RoBERTa model)
  • roberta-prelayernorm — TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
  • roformer — TFRoFormerForQuestionAnswering (RoFormer model)
  • xlm — TFXLMForQuestionAnsweringSimple (XLM model)
  • xlm-roberta — TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model)
  • xlnet — TFXLNetForQuestionAnsweringSimple (XLNet model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForQuestionAnswering.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForQuestionAnswering

class transformers.FlaxAutoModelForQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: FlaxAlbertForQuestionAnswering (ALBERT model)
    • BartConfig configuration class: FlaxBartForQuestionAnswering (BART model)
    • BertConfig configuration class: FlaxBertForQuestionAnswering (BERT model)
    • BigBirdConfig configuration class: FlaxBigBirdForQuestionAnswering (BigBird model)
    • DistilBertConfig configuration class: FlaxDistilBertForQuestionAnswering (DistilBERT model)
    • ElectraConfig configuration class: FlaxElectraForQuestionAnswering (ELECTRA model)
    • MBartConfig configuration class: FlaxMBartForQuestionAnswering (mBART model)
    • RoFormerConfig configuration class: FlaxRoFormerForQuestionAnswering (RoFormer model)
    • RobertaConfig configuration class: FlaxRobertaForQuestionAnswering (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
    • XLMRobertaConfig configuration class: FlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albertFlaxAlbertForQuestionAnswering (ALBERT model)
  • bartFlaxBartForQuestionAnswering (BART model)
  • bertFlaxBertForQuestionAnswering (BERT model)
  • big_birdFlaxBigBirdForQuestionAnswering (BigBird model)
  • distilbertFlaxDistilBertForQuestionAnswering (DistilBERT model)
  • electraFlaxElectraForQuestionAnswering (ELECTRA model)
  • mbartFlaxMBartForQuestionAnswering (mBART model)
  • robertaFlaxRobertaForQuestionAnswering (RoBERTa model)
  • roberta-prelayernormFlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model)
  • roformerFlaxRoFormerForQuestionAnswering (RoFormer model)
  • xlm-robertaFlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForTextEncoding

class transformers.AutoModelForTextEncoding

< >

( *args **kwargs )

TFAutoModelForTextEncoding

class transformers.TFAutoModelForTextEncoding

< >

( *args **kwargs )

Computer vision

以下の自動クラスは、次のコンピュータービジョンタスクに利用可能です。

AutoModelForDepthEstimation

class transformers.AutoModelForDepthEstimation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • DPTConfig configuration class: DPTForDepthEstimation (DPT model)
    • DepthAnythingConfig configuration class: DepthAnythingForDepthEstimation (Depth Anything model)
    • GLPNConfig configuration class: GLPNForDepthEstimation (GLPN model)
    • ZoeDepthConfig configuration class: ZoeDepthForDepthEstimation (ZoeDepth model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a depth estimation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForDepthEstimation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("Intel/dpt-large")
>>> model = AutoModelForDepthEstimation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • depth_anythingDepthAnythingForDepthEstimation (Depth Anything model)
  • dptDPTForDepthEstimation (DPT model)
  • glpnGLPNForDepthEstimation (GLPN model)
  • zoedepthZoeDepthForDepthEstimation (ZoeDepth model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForDepthEstimation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large")

>>> # Update configuration during loading
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForDepthEstimation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForImageClassification

class transformers.AutoModelForImageClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BeitConfig configuration class: BeitForImageClassification (BEiT model)
    • BitConfig configuration class: BitForImageClassification (BiT model)
    • CLIPConfig configuration class: CLIPForImageClassification (CLIP model)
    • ConvNextConfig configuration class: ConvNextForImageClassification (ConvNeXT model)
    • ConvNextV2Config configuration class: ConvNextV2ForImageClassification (ConvNeXTV2 model)
    • CvtConfig configuration class: CvtForImageClassification (CvT model)
    • Data2VecVisionConfig configuration class: Data2VecVisionForImageClassification (Data2VecVision model)
    • DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
    • DinatConfig configuration class: DinatForImageClassification (DiNAT model)
    • Dinov2Config configuration class: Dinov2ForImageClassification (DINOv2 model)
    • EfficientFormerConfig configuration class: EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
    • EfficientNetConfig configuration class: EfficientNetForImageClassification (EfficientNet model)
    • FocalNetConfig configuration class: FocalNetForImageClassification (FocalNet model)
    • HieraConfig configuration class: HieraForImageClassification (Hiera model)
    • ImageGPTConfig configuration class: ImageGPTForImageClassification (ImageGPT model)
    • LevitConfig configuration class: LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model)
    • MobileNetV1Config configuration class: MobileNetV1ForImageClassification (MobileNetV1 model)
    • MobileNetV2Config configuration class: MobileNetV2ForImageClassification (MobileNetV2 model)
    • MobileViTConfig configuration class: MobileViTForImageClassification (MobileViT model)
    • MobileViTV2Config configuration class: MobileViTV2ForImageClassification (MobileViTV2 model)
    • NatConfig configuration class: NatForImageClassification (NAT model)
    • PerceiverConfig configuration class: PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
    • PoolFormerConfig configuration class: PoolFormerForImageClassification (PoolFormer model)
    • PvtConfig configuration class: PvtForImageClassification (PVT model)
    • PvtV2Config configuration class: PvtV2ForImageClassification (PVTv2 model)
    • RegNetConfig configuration class: RegNetForImageClassification (RegNet model)
    • ResNetConfig configuration class: ResNetForImageClassification (ResNet model)
    • SegformerConfig configuration class: SegformerForImageClassification (SegFormer model)
    • SiglipConfig configuration class: SiglipForImageClassification (SigLIP model)
    • SwiftFormerConfig configuration class: SwiftFormerForImageClassification (SwiftFormer model)
    • SwinConfig configuration class: SwinForImageClassification (Swin Transformer model)
    • Swinv2Config configuration class: Swinv2ForImageClassification (Swin Transformer V2 model)
    • VanConfig configuration class: VanForImageClassification (VAN model)
    • ViTConfig configuration class: ViTForImageClassification (ViT model)
    • ViTHybridConfig configuration class: ViTHybridForImageClassification (ViT Hybrid model)
    • ViTMSNConfig configuration class: ViTMSNForImageClassification (ViTMSN model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an image classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_config(config)
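To see the dispatch and the classification head concretely without downloading anything, here is a sketch with a tiny, locally built ViTConfig; the sizes are illustrative only and do not correspond to any released checkpoint.

```python
import torch
from transformers import AutoModelForImageClassification, ViTConfig

# A tiny ViT configuration built locally; sizes are illustrative only.
config = ViTConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    image_size=32,
    patch_size=8,
    num_labels=3,
)

# from_config dispatches on the configuration class:
# ViTConfig -> ViTForImageClassification.
model = AutoModelForImageClassification.from_config(config)

# The classification head maps the encoder output to config.num_labels logits.
pixel_values = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    logits = model(pixel_values).logits
print(type(model).__name__, logits.shape)
```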

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • beitBeitForImageClassification (BEiT model)
  • bitBitForImageClassification (BiT model)
  • clipCLIPForImageClassification (CLIP model)
  • convnextConvNextForImageClassification (ConvNeXT model)
  • convnextv2ConvNextV2ForImageClassification (ConvNeXTV2 model)
  • cvtCvtForImageClassification (CvT model)
  • data2vec-visionData2VecVisionForImageClassification (Data2VecVision model)
  • deitDeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
  • dinatDinatForImageClassification (DiNAT model)
  • dinov2Dinov2ForImageClassification (DINOv2 model)
  • efficientformerEfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
  • efficientnetEfficientNetForImageClassification (EfficientNet model)
  • focalnetFocalNetForImageClassification (FocalNet model)
  • hieraHieraForImageClassification (Hiera model)
  • imagegptImageGPTForImageClassification (ImageGPT model)
  • levitLevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model)
  • mobilenet_v1MobileNetV1ForImageClassification (MobileNetV1 model)
  • mobilenet_v2MobileNetV2ForImageClassification (MobileNetV2 model)
  • mobilevitMobileViTForImageClassification (MobileViT model)
  • mobilevitv2MobileViTV2ForImageClassification (MobileViTV2 model)
  • natNatForImageClassification (NAT model)
  • perceiverPerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
  • poolformerPoolFormerForImageClassification (PoolFormer model)
  • pvtPvtForImageClassification (PVT model)
  • pvt_v2PvtV2ForImageClassification (PVTv2 model)
  • regnetRegNetForImageClassification (RegNet model)
  • resnetResNetForImageClassification (ResNet model)
  • segformerSegformerForImageClassification (SegFormer model)
  • siglipSiglipForImageClassification (SigLIP model)
  • swiftformerSwiftFormerForImageClassification (SwiftFormer model)
  • swinSwinForImageClassification (Swin Transformer model)
  • swinv2Swinv2ForImageClassification (Swin Transformer V2 model)
  • vanVanForImageClassification (VAN model)
  • vitViTForImageClassification (ViT model)
  • vit_hybridViTHybridForImageClassification (ViT Hybrid model)
  • vit_msnViTMSNForImageClassification (ViTMSN model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
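The evaluation-mode note above can be made concrete without downloading weights: a model built with from_config() starts in training mode like any torch.nn.Module, and from_pretrained() is what calls model.eval() for you. A minimal sketch with a tiny, locally built ResNetConfig (illustrative sizes only):

```python
from transformers import AutoModelForImageClassification, ResNetConfig

# A tiny ResNet configuration built locally; sizes are illustrative only.
config = ResNetConfig(embedding_size=8, hidden_sizes=[8, 16], depths=[1, 1], num_labels=2)
model = AutoModelForImageClassification.from_config(config)

print(model.training)  # from_config leaves the module in training mode
model.eval()           # what from_pretrained() does for you: dropout etc. disabled
print(model.training)
model.train()          # switch back before fine-tuning
print(model.training)
```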

TFAutoModelForImageClassification

class transformers.TFAutoModelForImageClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an image classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = TFAutoModelForImageClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

Examples:

>>> from transformers import AutoConfig, TFAutoModelForImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForImageClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
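The kwargs behavior described in the parameters above (keys matching configuration attributes override the config; the remaining keys go to the model's __init__) can be sketched as follows. Note that split_kwargs and the attribute set here are hypothetical illustrations of the documented rule, not part of the transformers API:

```python
# Hypothetical sketch of how **kwargs are split when no config is
# provided: keys that correspond to configuration attributes update the
# config, everything else is forwarded to the model's __init__.
def split_kwargs(config_attributes, kwargs):
    config_kwargs = {k: v for k, v in kwargs.items() if k in config_attributes}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attributes}
    return config_kwargs, model_kwargs

# output_attentions is a config attribute, so it overrides the config;
# the unknown key is passed through to the model instead.
config_kwargs, model_kwargs = split_kwargs(
    {"output_attentions", "output_hidden_states"},
    {"output_attentions": True, "custom_arg": 1},
)
```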

FlaxAutoModelForImageClassification

class transformers.FlaxAutoModelForImageClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BeitConfig configuration class: FlaxBeitForImageClassification (BEiT model)
    • RegNetConfig configuration class: FlaxRegNetForImageClassification (RegNet model)
    • ResNetConfig configuration class: FlaxResNetForImageClassification (ResNet model)
    • ViTConfig configuration class: FlaxViTForImageClassification (ViT model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an image classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForImageClassification.from_config(config)
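The attn_implementation default noted above (SDPA when available on torch >= 2.1.1, otherwise the manual "eager" implementation) amounts to a version check. The helper below is a hypothetical illustration of that rule, not a transformers function:

```python
def default_attn_implementation(torch_version: str) -> str:
    # Hypothetical helper: per the docs, SDPA is the default for
    # torch >= 2.1.1; otherwise the manual "eager" implementation is
    # used. Local-version suffixes like "+cu118" are stripped first.
    parts = tuple(int(p) for p in torch_version.split("+")[0].split(".")[:3])
    return "sdpa" if parts >= (2, 1, 1) else "eager"
```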

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • beit — FlaxBeitForImageClassification (BEiT model)
  • regnet — FlaxRegNetForImageClassification (RegNet model)
  • resnet — FlaxResNetForImageClassification (ResNet model)
  • vit — FlaxViTForImageClassification (ViT model)
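The selection rule above (the config's model_type first, then pattern matching on the name or path) can be sketched as a simple dispatch. The mapping and resolve_class below are hypothetical illustrations, not the actual transformers implementation:

```python
# Hypothetical sketch of auto-class dispatch: prefer the config's
# model_type, then fall back to pattern matching on the name or path.
FLAX_IMAGE_CLS_MAPPING = {
    "beit": "FlaxBeitForImageClassification",
    "regnet": "FlaxRegNetForImageClassification",
    "resnet": "FlaxResNetForImageClassification",
    "vit": "FlaxViTForImageClassification",
}

def resolve_class(model_type, name_or_path):
    # model_type wins when it is present in the mapping.
    if model_type in FLAX_IMAGE_CLS_MAPPING:
        return FLAX_IMAGE_CLS_MAPPING[model_type]
    # Otherwise, fall back to substring matching on the name or path.
    for key, cls in FLAX_IMAGE_CLS_MAPPING.items():
        if key in name_or_path.lower():
            return cls
    raise ValueError(f"Unrecognized model {name_or_path}")
```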

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForImageClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForVideoClassification

class transformers.AutoModelForVideoClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • TimesformerConfig configuration class: TimesformerForVideoClassification (TimeSformer model)
    • VideoMAEConfig configuration class: VideoMAEForVideoClassification (VideoMAE model)
    • VivitConfig configuration class: VivitForVideoClassification (ViViT model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a video classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForVideoClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForVideoClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, check whether simply using save_pretrained() and from_pretrained() would be a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a video classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • timesformer — TimesformerForVideoClassification (TimeSformer model)
  • videomae — VideoMAEForVideoClassification (VideoMAE model)
  • vivit — VivitForVideoClassification (ViViT model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
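The evaluation/training toggle above follows torch.nn.Module semantics: a training flag, propagated to submodules, that layers such as dropout consult. A minimal torch-free sketch of that behavior (TinyModule is a hypothetical stand-in, not a transformers or torch class):

```python
# Hypothetical minimal sketch of the train/eval toggle: a single
# training flag propagated recursively to child modules, mirroring
# torch.nn.Module.train()/.eval() semantics.
class TinyModule:
    def __init__(self, *children):
        self.children = list(children)
        self.training = True

    def train(self, mode=True):
        self.training = mode
        for child in self.children:
            child.train(mode)
        return self

    def eval(self):
        return self.train(False)

model = TinyModule(TinyModule(), TinyModule())
model.eval()  # what from_pretrained does by default
```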

Examples:

>>> from transformers import AutoConfig, AutoModelForVideoClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVideoClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForVideoClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForVideoClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForMaskedImageModeling

class transformers.AutoModelForMaskedImageModeling

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • DeiTConfig configuration class: DeiTForMaskedImageModeling (DeiT model)
    • FocalNetConfig configuration class: FocalNetForMaskedImageModeling (FocalNet model)
    • SwinConfig configuration class: SwinForMaskedImageModeling (Swin Transformer model)
    • Swinv2Config configuration class: Swinv2ForMaskedImageModeling (Swin Transformer V2 model)
    • ViTConfig configuration class: ViTForMaskedImageModeling (ViT model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMaskedImageModeling.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, check whether simply using save_pretrained() and from_pretrained() would be a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • deit — DeiTForMaskedImageModeling (DeiT model)
  • focalnet — FocalNetForMaskedImageModeling (FocalNet model)
  • swin — SwinForMaskedImageModeling (Swin Transformer model)
  • swinv2 — Swinv2ForMaskedImageModeling (Swin Transformer V2 model)
  • vit — ViTForMaskedImageModeling (ViT model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedImageModeling.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForMaskedImageModeling

class transformers.TFAutoModelForMaskedImageModeling

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForMaskedImageModeling.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForObjectDetection

class transformers.AutoModelForObjectDetection

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

Instantiates one of the model classes of the library (with an object detection head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForObjectDetection.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
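
The train/eval contract above can be sketched with a minimal stdlib-only stand-in (DropoutLike is illustrative, not a transformers or torch class):

```python
# Minimal stdlib-only sketch of the train/eval contract described above.
# DropoutLike is an illustrative stand-in, not a transformers or torch class.
class DropoutLike:
    def __init__(self):
        self.training = True  # modules start in training mode

    def eval(self):
        # from_pretrained() leaves the loaded model in this state by default,
        # so stochastic layers such as dropout are deactivated.
        self.training = False
        return self

    def train(self):
        # Call this before fine-tuning to restore training-time behavior.
        self.training = True
        return self

model = DropoutLike().eval()
assert model.training is False          # inference mode, as after loading
assert model.train().training is True   # back in training mode
```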

Examples:

>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForObjectDetection.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForObjectDetection.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForImageSegmentation

class transformers.AutoModelForImageSegmentation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
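
The default selection described for attn_implementation can be sketched with stdlib-only code; the helper name and version-tuple encoding here are assumptions for illustration, not the library's internals:

```python
# Hedged sketch of the default attn_implementation choice described above:
# an explicit request always wins; otherwise prefer "sdpa" when a torch
# version >= 2.1.1 is available, else fall back to the manual "eager" path.
def pick_attn_implementation(requested=None, torch_version=None):
    """requested: explicit user choice or None.
    torch_version: version tuple like (2, 1, 1), or None if torch is absent."""
    if requested is not None:
        return requested  # e.g. "flash_attention_2" must be asked for explicitly
    if torch_version is not None and torch_version >= (2, 1, 1):
        return "sdpa"
    return "eager"

assert pick_attn_implementation("flash_attention_2", (2, 2, 0)) == "flash_attention_2"
assert pick_attn_implementation(None, (2, 1, 1)) == "sdpa"
assert pick_attn_implementation(None, (2, 0, 0)) == "eager"
```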

Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForImageSegmentation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForImageSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForImageToImage

class transformers.AutoModelForImageToImage

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image-to-image head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

AutoModelForSemanticSegmentation

class transformers.AutoModelForSemanticSegmentation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BeitConfig configuration class: BeitForSemanticSegmentation (BEiT model)
    • DPTConfig configuration class: DPTForSemanticSegmentation (DPT model)
    • Data2VecVisionConfig configuration class: Data2VecVisionForSemanticSegmentation (Data2VecVision model)
    • MobileNetV2Config configuration class: MobileNetV2ForSemanticSegmentation (MobileNetV2 model)
    • MobileViTConfig configuration class: MobileViTForSemanticSegmentation (MobileViT model)
    • MobileViTV2Config configuration class: MobileViTV2ForSemanticSegmentation (MobileViTV2 model)
    • SegformerConfig configuration class: SegformerForSemanticSegmentation (SegFormer model)
    • UperNetConfig configuration class: UperNetForSemanticSegmentation (UPerNet model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForSemanticSegmentation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • beitBeitForSemanticSegmentation (BEiT model)
  • data2vec-visionData2VecVisionForSemanticSegmentation (Data2VecVision model)
  • dptDPTForSemanticSegmentation (DPT model)
  • mobilenet_v2MobileNetV2ForSemanticSegmentation (MobileNetV2 model)
  • mobilevitMobileViTForSemanticSegmentation (MobileViT model)
  • mobilevitv2MobileViTV2ForSemanticSegmentation (MobileViTV2 model)
  • segformerSegformerForSemanticSegmentation (SegFormer model)
  • upernetUperNetForSemanticSegmentation (UPerNet model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSemanticSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForSemanticSegmentation

class transformers.TFAutoModelForSemanticSegmentation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Data2VecVisionConfig configuration class: TFData2VecVisionForSemanticSegmentation (Data2VecVision model)
    • MobileViTConfig configuration class: TFMobileViTForSemanticSegmentation (MobileViT model)
    • SegformerConfig configuration class: TFSegformerForSemanticSegmentation (SegFormer model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForSemanticSegmentation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • data2vec-visionTFData2VecVisionForSemanticSegmentation (Data2VecVision model)
  • mobilevitTFMobileViTForSemanticSegmentation (MobileViT model)
  • segformerTFSegformerForSemanticSegmentation (SegFormer model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForInstanceSegmentation

class transformers.AutoModelForInstanceSegmentation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an instance segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an instance segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
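The default attn_implementation choice described above can be sketched in plain Python. This is an illustrative sketch only, not the library's internals; the helper name and its arguments are hypothetical:

```python
# Hypothetical helper sketching the default attn_implementation selection
# described above: an explicit request wins; otherwise "sdpa" is preferred
# when available on torch >= 2.1.1, with "eager" as the fallback.
def pick_attn_implementation(requested=None, torch_version=(2, 2, 0), sdpa_available=True):
    if requested is not None:
        return requested  # explicit user choice is used as-is
    if sdpa_available and torch_version >= (2, 1, 1):
        return "sdpa"
    return "eager"

print(pick_attn_implementation())                          # sdpa
print(pick_attn_implementation(torch_version=(2, 0, 0)))   # eager
print(pick_attn_implementation("flash_attention_2"))       # flash_attention_2
```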

Examples:

>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForInstanceSegmentation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an instance segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • maskformer — MaskFormerForInstanceSegmentation (MaskFormer model)
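The mapping above amounts to a dictionary lookup keyed on model_type. A minimal sketch with a toy stand-in class (not the actual transformers dispatch code):

```python
# Toy stand-in for the real model class; only the dispatch is illustrated.
class MaskFormerForInstanceSegmentation:
    def __init__(self, config):
        self.config = config

# model_type -> model class, mirroring the list above.
MODEL_MAPPING = {"maskformer": MaskFormerForInstanceSegmentation}

def auto_for_instance_segmentation(config):
    model_type = config["model_type"]
    if model_type not in MODEL_MAPPING:
        raise ValueError(f"Unrecognized model_type: {model_type!r}")
    return MODEL_MAPPING[model_type](config)

model = auto_for_instance_segmentation({"model_type": "maskformer"})
print(type(model).__name__)  # MaskFormerForInstanceSegmentation
```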

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForInstanceSegmentation.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForInstanceSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForInstanceSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForUniversalSegmentation

class transformers.AutoModelForUniversalSegmentation

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a universal image segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • DetrConfig configuration class: DetrForSegmentation (DETR model)
    • Mask2FormerConfig configuration class: Mask2FormerForUniversalSegmentation (Mask2Former model)
    • MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model)
    • OneFormerConfig configuration class: OneFormerForUniversalSegmentation (OneFormer model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a universal image segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForUniversalSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForUniversalSegmentation.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
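The two kwargs paths above can be sketched with a toy config object. This is illustrative only; ToyConfig and split_kwargs are hypothetical names, not transformers API:

```python
class ToyConfig:
    """Toy stand-in for PretrainedConfig with two attributes."""
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 8

def split_kwargs(config, **kwargs):
    # When no explicit config is passed: keys matching config attributes
    # override the config; the rest are forwarded to the model's __init__.
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)
        else:
            model_kwargs[key] = value
    return config, model_kwargs

cfg, extra = split_kwargs(ToyConfig(), output_attentions=True, my_model_arg=3)
print(cfg.output_attentions)  # True
print(extra)                  # {'my_model_arg': 3}
```

When an explicit config is supplied, this splitting is skipped and all kwargs go straight to the model constructor, which is why the docs say the configuration is then assumed to be fully updated already.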

Instantiate one of the model classes of the library (with a universal image segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • detr — DetrForSegmentation (DETR model)
  • mask2former — Mask2FormerForUniversalSegmentation (Mask2Former model)
  • maskformer — MaskFormerForInstanceSegmentation (MaskFormer model)
  • oneformer — OneFormerForUniversalSegmentation (OneFormer model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
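The eval/train toggle behaves like this minimal sketch, using a toy module with a training flag rather than a real transformers model:

```python
class ToyModel:
    """Toy module mimicking the training-mode flag of a loaded model."""
    def __init__(self):
        self.training = False  # from_pretrained() returns models in eval mode

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

model = ToyModel()
print(model.training)  # False: dropout-style layers would be inactive
model.train()
print(model.training)  # True: ready for fine-tuning
```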

Examples:

>>> from transformers import AutoConfig, AutoModelForUniversalSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForUniversalSegmentation.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForUniversalSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForUniversalSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForZeroShotImageClassification

class transformers.AutoModelForZeroShotImageClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForZeroShotImageClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForZeroShotImageClassification

class transformers.TFAutoModelForZeroShotImageClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForZeroShotImageClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

Examples:

>>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForZeroShotObjectDetection

class transformers.AutoModelForZeroShotObjectDetection

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • GroundingDinoConfig configuration class: GroundingDinoForObjectDetection (Grounding DINO model)
    • OwlViTConfig configuration class: OwlViTForObjectDetection (OWL-ViT model)
    • Owlv2Config configuration class: Owlv2ForObjectDetection (OWLv2 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForZeroShotObjectDetection.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
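The kwargs-splitting behavior described in the two bullets above can be sketched as follows. This is a simplified illustration, not the actual transformers implementation; SimpleConfig and split_kwargs are hypothetical names standing in for PretrainedConfig and the internal logic:

```python
# Sketch: keys matching configuration attributes override the config,
# the remaining keys are forwarded to the model's __init__.

class SimpleConfig:
    """Stand-in for PretrainedConfig with a couple of known attributes."""
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 768

def split_kwargs(config, **kwargs):
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)   # override the config attribute
        else:
            model_kwargs[key] = value     # leave for the model __init__
    return config, model_kwargs

config, model_kwargs = split_kwargs(
    SimpleConfig(), output_attentions=True, some_model_arg=3
)
print(config.output_attentions)  # True
print(model_kwargs)              # {'some_model_arg': 3}
```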

Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • grounding-dinoGroundingDinoForObjectDetection (Grounding DINO model)
  • owlv2Owlv2ForObjectDetection (OWLv2 model)
  • owlvitOwlViTForObjectDetection (OWL-ViT model)
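The two-stage lookup described above (model_type first, then pattern matching on the name or path) can be sketched with a plain dictionary. This is a minimal illustration with hypothetical helper names, not the library's actual dispatch code:

```python
# Sketch of auto-class dispatch: config.model_type wins when present,
# otherwise fall back to substring matching on the checkpoint name.

MODEL_MAPPING = {
    "grounding-dino": "GroundingDinoForObjectDetection",
    "owlv2": "Owlv2ForObjectDetection",
    "owlvit": "OwlViTForObjectDetection",
}

def resolve_model_class(name_or_path, model_type=None):
    if model_type is not None:
        return MODEL_MAPPING[model_type]
    # Longest pattern wins, so more specific names are preferred.
    for pattern in sorted(MODEL_MAPPING, key=len, reverse=True):
        if pattern in name_or_path:
            return MODEL_MAPPING[pattern]
    raise ValueError(f"Could not infer a model class from {name_or_path!r}")

print(resolve_model_class("google/owlv2-base-patch16"))    # Owlv2ForObjectDetection
print(resolve_model_class("repo/x", model_type="owlvit"))  # OwlViTForObjectDetection
```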

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

Audio

The following auto classes are available for the audio tasks below.

AutoModelForAudioClassification

class transformers.AutoModelForAudioClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).
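The factory-only pattern stated above ("cannot be instantiated directly using __init__()") can be sketched like this. The AutoThing and ConcreteThing classes are hypothetical illustrations of the pattern, not the library's real code:

```python
# Sketch: __init__ raises, and classmethod factories build the
# concrete instance instead.

class AutoThing:
    def __init__(self):
        raise EnvironmentError(
            "AutoThing is designed to be instantiated using "
            "`AutoThing.from_pretrained(...)` or `AutoThing.from_config(...)`."
        )

    @classmethod
    def from_config(cls, config):
        # In the library, the concrete class is looked up from the config's
        # type; here the config simply declares its target for illustration.
        return config["target_class"]()

class ConcreteThing:
    pass

thing = AutoThing.from_config({"target_class": ConcreteThing})
print(type(thing).__name__)  # ConcreteThing
```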

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • ASTConfig configuration class: ASTForAudioClassification (Audio Spectrogram Transformer model)
    • Data2VecAudioConfig configuration class: Data2VecAudioForSequenceClassification (Data2VecAudio model)
    • HubertConfig configuration class: HubertForSequenceClassification (Hubert model)
    • SEWConfig configuration class: SEWForSequenceClassification (SEW model)
    • SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model)
    • UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model)
    • UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model)
    • Wav2Vec2BertConfig configuration class: Wav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model)
    • WavLMConfig configuration class: WavLMForSequenceClassification (WavLM model)
    • WhisperConfig configuration class: WhisperForAudioClassification (Whisper model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForAudioClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • audio-spectrogram-transformerASTForAudioClassification (Audio Spectrogram Transformer model)
  • data2vec-audioData2VecAudioForSequenceClassification (Data2VecAudio model)
  • hubertHubertForSequenceClassification (Hubert model)
  • sewSEWForSequenceClassification (SEW model)
  • sew-dSEWDForSequenceClassification (SEW-D model)
  • unispeechUniSpeechForSequenceClassification (UniSpeech model)
  • unispeech-satUniSpeechSatForSequenceClassification (UniSpeechSat model)
  • wav2vec2Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model)
  • wavlmWavLMForSequenceClassification (WavLM model)
  • whisperWhisperForAudioClassification (Whisper model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
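The train/eval switch mentioned above can be illustrated without torch. The Module class below is a hypothetical, torch-free sketch that mimics torch.nn.Module's train()/eval() semantics:

```python
# Sketch: a training-mode flag that propagates recursively to submodules,
# as model.eval() / model.train() do for dropout-like layers.

class Module:
    def __init__(self):
        self.training = True      # PyTorch modules start in training mode
        self.submodules = []

    def train(self, mode=True):
        self.training = mode
        for child in self.submodules:
            child.train(mode)     # the mode propagates recursively
        return self

    def eval(self):
        return self.train(False)

model = Module()
dropout = Module()                # stands in for a dropout layer
model.submodules.append(dropout)

model.eval()                      # what from_pretrained effectively does
print(dropout.training)           # False: dropout-like layers deactivated
model.train()                     # switch back before fine-tuning
print(dropout.training)           # True
```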

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForAudioClassification

class transformers.TFAutoModelForAudioClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Wav2Vec2Config configuration class: TFWav2Vec2ForSequenceClassification (Wav2Vec2 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForAudioClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = TFAutoModelForAudioClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • wav2vec2TFWav2Vec2ForSequenceClassification (Wav2Vec2 model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForAudioClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForAudioClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForAudioFrameClassification

class transformers.AutoModelForAudioFrameClassification

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Data2VecAudioConfig configuration class: Data2VecAudioForAudioFrameClassification (Data2VecAudio model)
    • UniSpeechSatConfig configuration class: UniSpeechSatForAudioFrameClassification (UniSpeechSat model)
    • Wav2Vec2BertConfig configuration class: Wav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model)
    • WavLMConfig configuration class: WavLMForAudioFrameClassification (WavLM model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForAudioFrameClassification.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio frame (token) classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • data2vec-audioData2VecAudioForAudioFrameClassification (Data2VecAudio model)
  • unispeech-satUniSpeechSatForAudioFrameClassification (UniSpeechSat model)
  • wav2vec2Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model)
  • wavlmWavLMForAudioFrameClassification (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioFrameClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForAudioFrameClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioFrameClassification.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForCTC

class transformers.AutoModelForCTC

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Data2VecAudioConfig configuration class: Data2VecAudioForCTC (Data2VecAudio model)
    • HubertConfig configuration class: HubertForCTC (Hubert model)
    • MCTCTConfig configuration class: MCTCTForCTC (M-CTC-T model)
    • SEWConfig configuration class: SEWForCTC (SEW model)
    • SEWDConfig configuration class: SEWDForCTC (SEW-D model)
    • UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model)
    • UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model)
    • Wav2Vec2BertConfig configuration class: Wav2Vec2BertForCTC (Wav2Vec2-BERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model)
    • WavLMConfig configuration class: WavLMForCTC (WavLM model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForCTC

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForCTC.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
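
The kwargs routing just described can be sketched as follows. This is a toy illustration of the splitting rule, with a plain dict standing in for a real PretrainedConfig:

```python
# Toy sketch of from_pretrained's kwargs routing when no config is
# provided: keys that match a configuration attribute override the
# config; the remaining keys are forwarded to the model's __init__.

def split_kwargs(config, **kwargs):
    model_kwargs = {}
    for key, value in kwargs.items():
        if key in config:
            config[key] = value        # override the config attribute
        else:
            model_kwargs[key] = value  # pass through to the model
    return config, model_kwargs

config = {"output_attentions": False, "hidden_size": 768}
config, model_kwargs = split_kwargs(config, output_attentions=True, state_dict=None)
print(config["output_attentions"])  # True
print(model_kwargs)                 # {'state_dict': None}
```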

Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • data2vec-audio — Data2VecAudioForCTC (Data2VecAudio model)
  • hubert — HubertForCTC (Hubert model)
  • mctct — MCTCTForCTC (M-CTC-T model)
  • sew — SEWForCTC (SEW model)
  • sew-d — SEWDForCTC (SEW-D model)
  • unispeech — UniSpeechForCTC (UniSpeech model)
  • unispeech-sat — UniSpeechSatForCTC (UniSpeechSat model)
  • wav2vec2 — Wav2Vec2ForCTC (Wav2Vec2 model)
  • wav2vec2-bert — Wav2Vec2BertForCTC (Wav2Vec2-BERT model)
  • wav2vec2-conformer — Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model)
  • wavlm — WavLMForCTC (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForCTC

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCTC.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForSpeechSeq2Seq

class transformers.AutoModelForSpeechSeq2Seq

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Pop2PianoConfig configuration class: Pop2PianoForConditionalGeneration (Pop2Piano model)
    • SeamlessM4TConfig configuration class: SeamlessM4TForSpeechToText (SeamlessM4T model)
    • SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model)
    • Speech2TextConfig configuration class: Speech2TextForConditionalGeneration (Speech2Text model)
    • SpeechEncoderDecoderConfig configuration class: SpeechEncoderDecoderModel (Speech Encoder decoder model)
    • SpeechT5Config configuration class: SpeechT5ForSpeechToText (SpeechT5 model)
    • WhisperConfig configuration class: WhisperForConditionalGeneration (Whisper model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("openai/whisper-tiny")
>>> model = AutoModelForSpeechSeq2Seq.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • pop2piano — Pop2PianoForConditionalGeneration (Pop2Piano model)
  • seamless_m4t — SeamlessM4TForSpeechToText (SeamlessM4T model)
  • seamless_m4t_v2 — SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model)
  • speech-encoder-decoder — SpeechEncoderDecoderModel (Speech Encoder decoder model)
  • speech_to_text — Speech2TextForConditionalGeneration (Speech2Text model)
  • speecht5 — SpeechT5ForSpeechToText (SpeechT5 model)
  • whisper — WhisperForConditionalGeneration (Whisper model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

>>> # Update configuration during loading
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForSpeechSeq2Seq

class transformers.TFAutoModelForSpeechSeq2Seq

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Speech2TextConfig configuration class: TFSpeech2TextForConditionalGeneration (Speech2Text model)
    • WhisperConfig configuration class: TFWhisperForConditionalGeneration (Whisper model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("openai/whisper-tiny")
>>> model = TFAutoModelForSpeechSeq2Seq.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • speech_to_text — TFSpeech2TextForConditionalGeneration (Speech2Text model)
  • whisper — TFWhisperForConditionalGeneration (Whisper model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

>>> # Update configuration during loading
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForSpeechSeq2Seq

class transformers.FlaxAutoModelForSpeechSeq2Seq

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • SpeechEncoderDecoderConfig configuration class: FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model)
    • WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("openai/whisper-tiny")
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • speech-encoder-decoder — FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model)
  • whisper — FlaxWhisperForConditionalGeneration (Whisper model)

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/whisper_pt_model_config.json")
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained(
...     "./pt_model/whisper_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForAudioXVector

class transformers.AutoModelForAudioXVector

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an x-vector head for audio retrieval) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Data2VecAudioConfig configuration class: Data2VecAudioForXVector (Data2VecAudio model)
    • UniSpeechSatConfig configuration class: UniSpeechSatForXVector (UniSpeechSat model)
    • Wav2Vec2BertConfig configuration class: Wav2Vec2BertForXVector (Wav2Vec2-BERT model)
    • Wav2Vec2Config configuration class: Wav2Vec2ForXVector (Wav2Vec2 model)
    • Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForXVector (Wav2Vec2-Conformer model)
    • WavLMConfig configuration class: WavLMForXVector (WavLM model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an x-vector head for audio retrieval) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioXVector

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base")
>>> model = AutoModelForAudioXVector.from_config(config)
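The dispatch that from_config performs can be sketched with a toy registry: the model class is chosen by the type of the config object, and no weights are downloaded. These are stand-in classes that only borrow the real names for illustration:

```python
# Toy illustration of from_config dispatch: the model class is selected
# by the *type* of the config object. Stand-in classes, not the real ones.

class Wav2Vec2Config:
    pass

class Wav2Vec2ForXVector:
    def __init__(self, config):
        self.config = config  # real models randomly initialize weights here

CONFIG_TO_MODEL = {Wav2Vec2Config: Wav2Vec2ForXVector}

def from_config(config):
    model_class = CONFIG_TO_MODEL[type(config)]  # dispatch on config class
    return model_class(config)                   # no weights are downloaded

model = from_config(Wav2Vec2Config())
print(type(model).__name__)  # Wav2Vec2ForXVector
```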

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. Even so, check whether simply using save_pretrained() and from_pretrained() is the simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an x-vector head for audio retrieval) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • data2vec-audioData2VecAudioForXVector (Data2VecAudio model)
  • unispeech-satUniSpeechSatForXVector (UniSpeechSat model)
  • wav2vec2Wav2Vec2ForXVector (Wav2Vec2 model)
  • wav2vec2-bertWav2Vec2BertForXVector (Wav2Vec2-BERT model)
  • wav2vec2-conformerWav2Vec2ConformerForXVector (Wav2Vec2-Conformer model)
  • wavlmWavLMForXVector (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
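The eval/train toggle matters because it switches stochastic layers such as dropout on and off. A minimal illustration with a plain torch module (assuming torch is installed; this is not Transformers-specific):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

model.eval()               # what from_pretrained does for you by default
print(model.training)      # False — dropout now behaves as the identity

model.train()              # switch back before fine-tuning
print(model.training)      # True
```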

Examples:

>>> from transformers import AutoConfig, AutoModelForAudioXVector

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioXVector.from_pretrained("facebook/wav2vec2-base")

>>> # Update configuration during loading
>>> model = AutoModelForAudioXVector.from_pretrained("facebook/wav2vec2-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioXVector.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForTextToSpectrogram

class transformers.AutoModelForTextToSpectrogram

< >

( *args **kwargs )

AutoModelForTextToWaveform

class transformers.AutoModelForTextToWaveform

< >

( *args **kwargs )

Multimodal

以下の自動クラスは、次のマルチモーダルタスクに利用可能です。

AutoModelForTableQuestionAnswering

class transformers.AutoModelForTableQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • TapasConfig configuration class: TapasForQuestionAnswering (TAPAS model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a table question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = AutoModelForTableQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. Even so, check whether simply using save_pretrained() and from_pretrained() is the simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • tapasTapasForQuestionAnswering (TAPAS model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/tapas_tf_model_config.json")
>>> model = AutoModelForTableQuestionAnswering.from_pretrained(
...     "./tf_model/tapas_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForTableQuestionAnswering

class transformers.TFAutoModelForTableQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • TapasConfig configuration class: TFTapasForQuestionAnswering (TAPAS model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a table question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = TFAutoModelForTableQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • tapasTFTapasForQuestionAnswering (TAPAS model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

>>> # Update configuration during loading
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/tapas_pt_model_config.json")
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained(
...     "./pt_model/tapas_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForDocumentQuestionAnswering

class transformers.AutoModelForDocumentQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • LayoutLMConfig configuration class: LayoutLMForQuestionAnswering (LayoutLM model)
    • LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
    • LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a document question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = AutoModelForDocumentQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. Even so, check whether simply using save_pretrained() and from_pretrained() is the simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
    • If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • layoutlmLayoutLMForQuestionAnswering (LayoutLM model)
  • layoutlmv2LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
  • layoutlmv3LayoutLMv3ForQuestionAnswering (LayoutLMv3 model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")

>>> # Update configuration during loading
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/layoutlm_tf_model_config.json")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(
...     "./tf_model/layoutlm_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForDocumentQuestionAnswering

class transformers.TFAutoModelForDocumentQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • LayoutLMConfig configuration class: TFLayoutLMForQuestionAnswering (LayoutLM model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a document question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to instantiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • layoutlm → TFLayoutLMForQuestionAnswering (LayoutLM model)
  • layoutlmv3 → TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")

>>> # Update configuration during loading
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/layoutlm_pt_model_config.json")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained(
...     "./pt_model/layoutlm_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForVisualQuestionAnswering

class transformers.AutoModelForVisualQuestionAnswering

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model)
    • BlipConfig configuration class: BlipForQuestionAnswering (BLIP model)
    • ViltConfig configuration class: ViltForQuestionAnswering (ViLT model)

Instantiates one of the model classes of the library (with a visual question answering head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
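Because from_config() only builds the architecture, it works offline and yields randomly initialized weights; a small sketch with a deliberately tiny ViltConfig (the tiny sizes are arbitrary, chosen only to keep the model small):

```python
from transformers import AutoModelForVisualQuestionAnswering, ViltConfig

# No checkpoint is fetched: the architecture is built from the config alone,
# so the weights are randomly initialized (not usable for inference as-is).
config = ViltConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
# The config class selects the model class (here a ViLT question answering model).
model = AutoModelForVisualQuestionAnswering.from_config(config)
assert model.config.hidden_size == 32
```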

Examples:

>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
>>> model = AutoModelForVisualQuestionAnswering.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to instantiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • blip → BlipForQuestionAnswering (BLIP model)
  • blip-2 → Blip2ForConditionalGeneration (BLIP-2 model)
  • vilt → ViltForQuestionAnswering (ViLT model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

>>> # Update configuration during loading
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vilt_tf_model_config.json")
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained(
...     "./tf_model/vilt_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForVision2Seq

class transformers.AutoModelForVision2Seq

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model)
    • BlipConfig configuration class: BlipForConditionalGeneration (BLIP model)
    • ChameleonConfig configuration class: ChameleonForConditionalGeneration (Chameleon model)
    • GitConfig configuration class: GitForCausalLM (GIT model)
    • Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model)
    • InstructBlipConfig configuration class: InstructBlipForConditionalGeneration (InstructBLIP model)
    • InstructBlipVideoConfig configuration class: InstructBlipVideoForConditionalGeneration (InstructBlipVideo model)
    • Kosmos2Config configuration class: Kosmos2ForConditionalGeneration (KOSMOS-2 model)
    • LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model)
    • LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model)
    • LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
    • PaliGemmaConfig configuration class: PaliGemmaForConditionalGeneration (PaliGemma model)
    • Pix2StructConfig configuration class: Pix2StructForConditionalGeneration (Pix2Struct model)
    • VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model)
    • VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model)
    • VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
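The "eager" and "sdpa" options compute the same attention values, only with different kernels; a torch-only sketch comparing the explicit formula against F.scaled_dot_product_attention:

```python
import math

import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Shapes: (batch, heads, seq_len, head_dim)
q = torch.randn(1, 4, 8, 16)
k = torch.randn(1, 4, 8, 16)
v = torch.randn(1, 4, 8, 16)

# "eager": explicit softmax(QK^T / sqrt(d)) V
eager = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.size(-1)), dim=-1) @ v

# "sdpa": the fused kernel behind F.scaled_dot_product_attention
sdpa = F.scaled_dot_product_attention(q, k, v)

assert torch.allclose(eager, sdpa, atol=1e-5)
```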

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForVision2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("Salesforce/blip-image-captioning-base")
>>> model = AutoModelForVision2Seq.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to instantiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • blip → BlipForConditionalGeneration (BLIP model)
  • blip-2 → Blip2ForConditionalGeneration (BLIP-2 model)
  • chameleon → ChameleonForConditionalGeneration (Chameleon model)
  • git → GitForCausalLM (GIT model)
  • idefics2 → Idefics2ForConditionalGeneration (Idefics2 model)
  • instructblip → InstructBlipForConditionalGeneration (InstructBLIP model)
  • instructblipvideo → InstructBlipVideoForConditionalGeneration (InstructBlipVideo model)
  • kosmos-2 → Kosmos2ForConditionalGeneration (KOSMOS-2 model)
  • llava → LlavaForConditionalGeneration (LLaVa model)
  • llava-next-video → LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
  • llava_next → LlavaNextForConditionalGeneration (LLaVA-NeXT model)
  • paligemma → PaliGemmaForConditionalGeneration (PaliGemma model)
  • pix2struct → Pix2StructForConditionalGeneration (Pix2Struct model)
  • video_llava → VideoLlavaForConditionalGeneration (VideoLlava model)
  • vipllava → VipLlavaForConditionalGeneration (VipLlava model)
  • vision-encoder-decoder → VisionEncoderDecoderModel (Vision Encoder decoder model)
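Loosely, the pattern-matching fallback checks the mapping keys against the model name or path; a toy sketch (MAPPING and guess_class are illustrative names only — the real implementation lives inside transformers and first consults config.model_type):

```python
from typing import Optional

# Toy version of the fallback: pick the first mapping key that occurs in the
# model name or path. More specific keys are listed before their prefixes.
MAPPING = {
    "blip-2": "Blip2ForConditionalGeneration",
    "blip": "BlipForConditionalGeneration",
    "llava": "LlavaForConditionalGeneration",
}

def guess_class(name_or_path: str) -> Optional[str]:
    for key, cls in MAPPING.items():
        if key in name_or_path.lower():
            return cls
    return None

assert guess_class("Salesforce/blip-image-captioning-base") == "BlipForConditionalGeneration"
assert guess_class("llava-hf/llava-1.5-7b-hf") == "LlavaForConditionalGeneration"
```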

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForVision2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip-image-captioning-base")

>>> # Update configuration during loading
>>> model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip-image-captioning-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/blip_tf_model_config.json")
>>> model = AutoModelForVision2Seq.from_pretrained(
...     "./tf_model/blip_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForVision2Seq

class transformers.TFAutoModelForVision2Seq

< >

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

< >

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • BlipConfig configuration class: TFBlipForConditionalGeneration (BLIP model)
    • VisionEncoderDecoderConfig configuration class: TFVisionEncoderDecoderModel (Vision Encoder decoder model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, TFAutoModelForVision2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("Salesforce/blip-image-captioning-base")
>>> model = TFAutoModelForVision2Seq.from_config(config)

from_pretrained

< >

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to instantiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • blip → TFBlipForConditionalGeneration (BLIP model)
  • vision-encoder-decoder → TFVisionEncoderDecoderModel (Vision Encoder decoder model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForVision2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForVision2Seq.from_pretrained("Salesforce/blip-image-captioning-base")

>>> # Update configuration during loading
>>> model = TFAutoModelForVision2Seq.from_pretrained("Salesforce/blip-image-captioning-base", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForVision2Seq.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

FlaxAutoModelForVision2Seq

class transformers.FlaxAutoModelForVision2Seq

( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • VisionEncoderDecoderConfig configuration class: FlaxVisionEncoderDecoderModel (Vision Encoder decoder model)
  • attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForVision2Seq.from_config(config)

from_pretrained

( *model_args **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).
    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.
    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)
    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
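The kwargs-routing rule above can be pictured with a small, self-contained sketch (hypothetical code, not the actual Transformers implementation; the attribute names are examples only):

```python
# Hypothetical sketch of how from_pretrained routes **kwargs when no
# explicit config is passed: keys matching configuration attributes
# become config overrides, the rest go to the model's __init__.
def split_kwargs(config_attrs, kwargs):
    config_updates = {k: v for k, v in kwargs.items() if k in config_attrs}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attrs}
    return config_updates, model_kwargs

# "output_attentions" is a config attribute here; "from_pt" is not.
updates, rest = split_kwargs(
    {"output_attentions", "hidden_size"},
    {"output_attentions": True, "from_pt": True},
)
```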

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • vision-encoder-decoder — FlaxVisionEncoderDecoderModel (Vision Encoder decoder model)
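This model_type-based selection amounts to a registry lookup; the following is a minimal, hypothetical sketch of the mechanism (the stub class and helper are illustrations, not part of the library):

```python
# Minimal sketch of model_type -> model class dispatch, as performed by
# the auto classes (illustrative stub; the real mapping is maintained
# inside transformers).
class FlaxVisionEncoderDecoderModelStub:
    """Stand-in for the real FlaxVisionEncoderDecoderModel."""

MODEL_MAPPING = {
    "vision-encoder-decoder": FlaxVisionEncoderDecoderModelStub,
}

def resolve_model_class(model_type):
    # Unknown model types raise, mirroring the auto classes' behavior.
    if model_type not in MODEL_MAPPING:
        raise ValueError(f"Unrecognized model_type: {model_type!r}")
    return MODEL_MAPPING[model_type]
```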

Examples:

>>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForVision2Seq.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )