# REALM

## [](#overview)Overview
The REALM model was proposed in [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It is a retrieval-augmented language model that first retrieves documents from a textual knowledge corpus and then uses the retrieved documents to answer questions.
The abstract from the paper is the following:
_Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity._
This model was contributed by [qqaatw](https://huggingface.co/qqaatw). The original code can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## [](#transformers.RealmConfig)RealmConfig
### class transformers.RealmConfig
[](#transformers.RealmConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/configuration_realm.py#L44)
( vocab\_size = 30522, hidden\_size = 768, retriever\_proj\_size = 128, num\_hidden\_layers = 12, num\_attention\_heads = 12, num\_candidates = 8, intermediate\_size = 3072, hidden\_act = 'gelu\_new', hidden\_dropout\_prob = 0.1, attention\_probs\_dropout\_prob = 0.1, max\_position\_embeddings = 512, type\_vocab\_size = 2, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, span\_hidden\_size = 256, max\_span\_width = 10, reader\_layer\_norm\_eps = 0.001, reader\_beam\_size = 5, reader\_seq\_len = 320, num\_block\_records = 13353718, searcher\_beam\_size = 5000, pad\_token\_id = 1, bos\_token\_id = 0, eos\_token\_id = 2, \*\*kwargs )
This is the configuration class to store the configuration of
1. [RealmEmbedder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder)
2. [RealmScorer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer)
3. [RealmKnowledgeAugEncoder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder)
4. [RealmRetriever](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmRetriever)
5. [RealmReader](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader)
6. [RealmForOpenQA](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA)
It is used to instantiate a REALM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the REALM [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) architecture.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Example:
```
>>> from transformers import RealmConfig, RealmEmbedder

>>> # Initializing a REALM realm-cc-news-pretrained-* style configuration
>>> configuration = RealmConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RealmEmbedder(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
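Any of the arguments listed above can be overridden at construction time in the same way; the values in the following sketch are purely illustrative, not recommended settings:

```
>>> from transformers import RealmConfig

>>> # Illustrative override of two retrieval-related hyperparameters
>>> custom_configuration = RealmConfig(num_candidates=4, searcher_beam_size=1000)
>>> custom_configuration.num_candidates
4
```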
## [](#transformers.RealmTokenizer)RealmTokenizer
### class transformers.RealmTokenizer
[](#transformers.RealmTokenizer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L95)
( vocab\_file, do\_lower\_case = True, do\_basic\_tokenize = True, never\_split = None, unk\_token = '\[UNK\]', sep\_token = '\[SEP\]', pad\_token = '\[PAD\]', cls\_token = '\[CLS\]', mask\_token = '\[MASK\]', tokenize\_chinese\_chars = True, strip\_accents = None, \*\*kwargs )
Construct a REALM tokenizer.
[RealmTokenizer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.
This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
#### build\_inputs\_with\_special\_tokens
[](#transformers.RealmTokenizer.build_inputs_with_special_tokens)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L301)
( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- [](#transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A REALM sequence has the following format (see the sketch after this list):
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
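A minimal sketch of calling the method directly; the `google/realm-cc-news-pretrained-embedder` checkpoint is simply the one used elsewhere on this page:

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world!"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Nice to meet you!"))

>>> # [CLS] A [SEP] for a single sequence, [CLS] A [SEP] B [SEP] for a pair
>>> single_input = tokenizer.build_inputs_with_special_tokens(ids_a)
>>> pair_input = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
```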
#### get\_special\_tokens\_mask
[](#transformers.RealmTokenizer.get_special_tokens_mask)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L326)
( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None, already\_has\_special\_tokens: bool = False ) → `List[int]`
Parameters
- [](#transformers.RealmTokenizer.get_special_tokens_mask.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.RealmTokenizer.get_special_tokens_mask.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- [](#transformers.RealmTokenizer.get_special_tokens_mask.already_has_special_tokens)**already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
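For example, on plain token IDs without special tokens, the returned mask marks the positions where `[CLS]` and `[SEP]` would be inserted. A brief sketch, reusing the same checkpoint as the other tokenizer examples on this page:

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world!"))

>>> # 1 marks positions that will hold special tokens once they are added
>>> mask = tokenizer.get_special_tokens_mask(ids)
```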
#### create\_token\_type\_ids\_from\_sequences
[](#transformers.RealmTokenizer.create_token_type_ids_from_sequences)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L354)
( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A REALM sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```
If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
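Called directly, the method produces the 0/1 segment IDs shown above; a short sketch under the same checkpoint assumption as the other tokenizer examples:

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world!"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Nice to meet you!"))

>>> # 0s cover [CLS] A [SEP], 1s cover B [SEP]
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
```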
#### save\_vocabulary
[](#transformers.RealmTokenizer.save_vocabulary)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L383)
( save\_directory: str, filename\_prefix: typing.Optional\[str\] = None )
#### batch\_encode\_candidates
[](#transformers.RealmTokenizer.batch_encode_candidates)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L228)
( text, \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding)
Parameters
- [](#transformers.RealmTokenizer.batch_encode_candidates.text)**text** (`List[List[str]]`) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- [](#transformers.RealmTokenizer.batch_encode_candidates.text_pair)**text\_pair** (`List[List[str]]`, _optional_) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- **\*\*kwargs** — Keyword arguments of the `__call__` method.
Encoded text or text pair.
Encode a batch of text or text pairs. This method is similar to the regular `__call__` method but has the following differences:
1. It handles an additional num\_candidates axis: (batch\_size, num\_candidates, text).
2. It always pads the sequences to _max\_length_.
3. _max\_length_ must be specified in order to stack packs of candidates into a batch.
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
Example:
```
>>> from transformers import RealmTokenizer

>>> # batch_size = 2, num_candidates = 2
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
```
## [](#transformers.RealmTokenizerFast)RealmTokenizerFast
### class transformers.RealmTokenizerFast
[](#transformers.RealmTokenizerFast)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm_fast.py#L102)
( vocab\_file = None, tokenizer\_file = None, do\_lower\_case = True, unk\_token = '\[UNK\]', sep\_token = '\[SEP\]', pad\_token = '\[PAD\]', cls\_token = '\[CLS\]', mask\_token = '\[MASK\]', tokenize\_chinese\_chars = True, strip\_accents = None, \*\*kwargs )
Construct a “fast” REALM tokenizer (backed by HuggingFace’s _tokenizers_ library). Based on WordPiece.
[RealmTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizerFast) is identical to [BertTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizerFast) and runs end-to-end tokenization: punctuation splitting and wordpiece.
This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
#### batch\_encode\_candidates
[](#transformers.RealmTokenizerFast.batch_encode_candidates)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm_fast.py#L193)
( text, \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding)
Parameters
- [](#transformers.RealmTokenizerFast.batch_encode_candidates.text)**text** (`List[List[str]]`) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- [](#transformers.RealmTokenizerFast.batch_encode_candidates.text_pair)**text\_pair** (`List[List[str]]`, _optional_) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- **\*\*kwargs** — Keyword arguments of the `__call__` method.
Encoded text or text pair.
Encode a batch of text or text pairs. This method is similar to the regular `__call__` method but has the following differences:
1. It handles an additional num\_candidates axis: (batch\_size, num\_candidates, text).
2. It always pads the sequences to _max\_length_.
3. _max\_length_ must be specified in order to stack packs of candidates into a batch.
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
Example:
```
>>> from transformers import RealmTokenizerFast

>>> # batch_size = 2, num_candidates = 2
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]

>>> tokenizer = RealmTokenizerFast.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
```
## [](#transformers.RealmRetriever)RealmRetriever
### class transformers.RealmRetriever
[](#transformers.RealmRetriever)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/retrieval_realm.py#L72)
( block\_records, tokenizer )
Parameters
- [](#transformers.RealmRetriever.block_records)**block\_records** (`np.ndarray`) — A numpy array which contains the evidence texts.
- [](#transformers.RealmRetriever.tokenizer)**tokenizer** ([RealmTokenizer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizer)) — The tokenizer to encode retrieved texts.
The retriever of REALM, outputting the retrieved evidence block along with whether the block has answers and the answer positions.
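In practice the retriever is usually loaded from a pretrained open-QA checkpoint and passed to [RealmForOpenQA](#transformers.RealmForOpenQA), as in the example at the bottom of this page; a minimal loading sketch:

```
>>> from transformers import RealmRetriever

>>> # Loads the block records (evidence corpus) and the tokenizer from the Hub
>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
```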
#### block\_has\_answer
[](#transformers.RealmRetriever.block_has_answer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/retrieval_realm.py#L129)
( concat\_inputs, answer\_ids )
Check whether the retrieved blocks have answers.
## [](#transformers.RealmEmbedder)RealmEmbedder
### class transformers.RealmEmbedder
[](#transformers.RealmEmbedder)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1149)
( config )
Parameters
- [](#transformers.RealmEmbedder.config)**config** ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The embedder of REALM, outputting projected scores that are used to calculate relevance scores. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.RealmEmbedder.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1165)
( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or `tuple(torch.FloatTensor)`
The [RealmEmbedder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
```
>>> from transformers import AutoTokenizer, RealmEmbedder
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> projected_score = outputs.projected_score
```
## [](#transformers.RealmScorer)RealmScorer
### class transformers.RealmScorer
[](#transformers.RealmScorer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1231)
( config, query\_embedder = None )
Parameters
- [](#transformers.RealmScorer.config)**config** ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
- [](#transformers.RealmScorer.query_embedder)**query\_embedder** ([RealmEmbedder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder)) — Embedder for input sequences. If not specified, it will use the same embedder as candidate sequences.
The scorer of REALM, outputting relevance scores that represent the scores of document candidates (before softmax). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.RealmScorer.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1247)
( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, candidate\_input\_ids: typing.Optional\[torch.LongTensor\] = None, candidate\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None, candidate\_token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, candidate\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)`
The [RealmScorer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
```
>>> import torch
>>> from transformers import AutoTokenizer, RealmScorer

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer")
>>> model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

>>> # batch_size = 2, num_candidates = 2
>>> input_texts = ["How are you?", "What is the item in the picture?"]
>>> candidates_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]]
>>> inputs = tokenizer(input_texts, return_tensors="pt")
>>> candidates_inputs = tokenizer.batch_encode_candidates(candidates_texts, max_length=10, return_tensors="pt")

>>> outputs = model(
...     **inputs,
...     candidate_input_ids=candidates_inputs.input_ids,
...     candidate_attention_mask=candidates_inputs.attention_mask,
...     candidate_token_type_ids=candidates_inputs.token_type_ids,
... )
>>> relevance_score = outputs.relevance_score
```
## [](#transformers.RealmKnowledgeAugEncoder)RealmKnowledgeAugEncoder
### class transformers.RealmKnowledgeAugEncoder
[](#transformers.RealmKnowledgeAugEncoder)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1379)
( config )
Parameters
- [](#transformers.RealmKnowledgeAugEncoder.config)**config** ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The knowledge-augmented encoder of REALM, outputting masked language model logits and the marginal log-likelihood loss. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.RealmKnowledgeAugEncoder.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1400)
( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, relevance\_score: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, mlm\_mask: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`
The [RealmKnowledgeAugEncoder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
```
>>> import torch
>>> from transformers import AutoTokenizer, RealmKnowledgeAugEncoder

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> model = RealmKnowledgeAugEncoder.from_pretrained(
...     "google/realm-cc-news-pretrained-encoder", num_candidates=2
... )

>>> # batch_size = 2, num_candidates = 2
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]
>>> inputs = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```
## [](#transformers.RealmReader)RealmReader
### class transformers.RealmReader
[](#transformers.RealmReader)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1529)
( config )
Parameters
- [](#transformers.RealmReader.config)**config** ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The reader of REALM. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.RealmReader.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1542)
( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, relevance\_score: typing.Optional\[torch.FloatTensor\] = None, block\_mask: typing.Optional\[torch.BoolTensor\] = None, start\_positions: typing.Optional\[torch.LongTensor\] = None, end\_positions: typing.Optional\[torch.LongTensor\] = None, has\_answers: typing.Optional\[torch.BoolTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmReaderOutput` or `tuple(torch.FloatTensor)`
The [RealmReader](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
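The reader's `relevance_score` and `block_mask` inputs are normally produced by the retrieval step inside [RealmForOpenQA](#transformers.RealmForOpenQA), so it is rarely called on its own; the sketch below only shows loading it, and the `google/realm-orqa-nq-reader` checkpoint name is an assumption based on the naming of the other REALM checkpoints on this page:

```
>>> from transformers import RealmReader

>>> # The reader is usually driven by RealmForOpenQA rather than called directly
>>> reader = RealmReader.from_pretrained("google/realm-orqa-nq-reader")
```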
## [](#transformers.RealmForOpenQA)RealmForOpenQA
### class transformers.RealmForOpenQA
[](#transformers.RealmForOpenQA)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1735)
( config, retriever = None )
Parameters
- [](#transformers.RealmForOpenQA.config)**config** ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
`RealmForOpenQA` for end-to-end open-domain question answering. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### block\_embedding\_to
[](#transformers.RealmForOpenQA.block_embedding_to)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1758)
( device )
Parameters
- [](#transformers.RealmForOpenQA.block_embedding_to.device)**device** (`str` or `torch.device`) — The device to which `self.block_emb` will be sent.
Send `self.block_emb` to a specific device.
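For instance, the pre-computed block embeddings can be moved to a GPU while the rest of the setup stays the same as in the forward example below; a brief sketch:

```
>>> import torch
>>> from transformers import RealmForOpenQA, RealmRetriever

>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
>>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

>>> # Move the large block embedding matrix to the GPU if one is available
>>> if torch.cuda.is_available():
...     model.block_embedding_to(torch.device("cuda"))
```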
#### forward
[](#transformers.RealmForOpenQA.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1768)
( input\_ids: typing.Optional\[torch.LongTensor\], attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, answer\_ids: typing.Optional\[torch.LongTensor\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmForOpenQAOutput` or `tuple(torch.FloatTensor)`
The [RealmForOpenQA](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
```
>>> import torch
>>> from transformers import RealmForOpenQA, RealmRetriever, AutoTokenizer

>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-orqa-nq-openqa")
>>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

>>> question = "Who is the pioneer in modern computer science?"
>>> question_ids = tokenizer([question], return_tensors="pt")
>>> answer_ids = tokenizer(
...     ["alan mathison turing"],
...     add_special_tokens=False,
...     return_token_type_ids=False,
...     return_attention_mask=False,
... ).input_ids

>>> reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
>>> predicted_answer = tokenizer.decode(predicted_answer_ids)
>>> loss = reader_output.loss
```
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/realm","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"REALM"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] 
false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">REALM</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 
group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="realm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#realm"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>REALM</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>The REALM model was proposed in <a href="https://arxiv.org/abs/2002.08909" rel="nofollow">REALM: Retrieval-Augmented Language Model Pre-Training</a> by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It’s a
retrieval-augmented language model that firstly retrieves documents from a textual knowledge corpus and then
utilizes retrieved documents to process question answering tasks.</p> <p>The abstract from the paper is the following:</p> <p><em>Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks
such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network,
requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we
augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend
over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the
first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language
modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We
demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the
challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both
explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous
methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as
interpretability and modularity.</em></p> <p>This model was contributed by <a href="https://huggingface.co/qqaatw" rel="nofollow">qqaatw</a>. The original code can be found
<a href="https://github.com/google-research/language/tree/master/language/realm" rel="nofollow">here</a>.</p> <h2 class="relative group"><a id="transformers.RealmConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RealmConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmConfig</span></span></h3> <a id="transformers.RealmConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/configuration_realm.py#L44" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 30522</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60"> = 768</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">retriever_proj_size<span class="opacity-60"> = 128</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_hidden_layers<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_attention_heads<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_candidates<span class="opacity-60"> = 8</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">intermediate_size<span class="opacity-60"> = 3072</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_act<span class="opacity-60"> = 'gelu_new'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_dropout_prob<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_probs_dropout_prob<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 512</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">type_vocab_size<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">span_hidden_size<span class="opacity-60"> = 256</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_span_width<span class="opacity-60"> = 10</span></span></span><span class="comma 
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_layer_norm_eps<span class="opacity-60"> = 0.001</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_beam_size<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_seq_len<span class="opacity-60"> = 320</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_block_records<span class="opacity-60"> = 13353718</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">searcher_beam_size<span class="opacity-60"> = 5000</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 21 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, 
<em>optional</em>, defaults to 30522) —
Vocabulary size of the REALM model. Defines the number of different tokens that can be represented by the
<code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder">RealmEmbedder</a>, <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer">RealmScorer</a>, <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder">RealmKnowledgeAugEncoder</a>, or
<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader">RealmReader</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) —
Dimension of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.retriever_proj_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.retriever_proj_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>retriever_proj_size</strong> (<code>int</code>, <em>optional</em>, defaults to 128) —
Dimension of the retriever(embedder) projection.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) —
Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_candidates" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_candidates"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_candidates</strong> (<code>int</code>, <em>optional</em>, defaults to 8) —
Number of candidates inputted to the RealmScorer or RealmKnowledgeAugEncoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu_new"</code>) —
The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>,
<code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) —
The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) —
The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.type_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.type_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>type_vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 2) —
The vocabulary size of the <code>token_type_ids</code> passed when calling <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder">RealmEmbedder</a>, <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer">RealmScorer</a>,
<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder">RealmKnowledgeAugEncoder</a>, or <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader">RealmReader</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) —
The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.span_hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.span_hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>span_hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256) —
Dimension of the reader’s spans.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.max_span_width" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.max_span_width"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_span_width</strong> (<code>int</code>, <em>optional</em>, defaults to 10) —
Max span width of the reader.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.reader_layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.reader_layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>reader_layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-3) —
The epsilon used by the reader’s layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.reader_beam_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.reader_beam_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>reader_beam_size</strong> (<code>int</code>, <em>optional</em>, defaults to 5) —
Beam size of the reader.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.reader_seq_len" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.reader_seq_len"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>reader_seq_len</strong> (<code>int</code>, <em>optional</em>, defaults to 288+32) —
Maximum sequence length of the reader.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_block_records" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_block_records"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_block_records</strong> (<code>int</code>, <em>optional</em>, defaults to 13353718) —
Number of block records.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.searcher_beam_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.searcher_beam_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>searcher_beam_size</strong> (<code>int</code>, <em>optional</em>, defaults to 5000) —
Beam size of the searcher. Note that when eval mode is enabled, <em>searcher_beam_size</em> will be the same as
<em>reader_beam_size</em>.</span></span> </li></ul> </div></div> <p>This is the configuration class to store the configuration of</p> <ol><li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder">RealmEmbedder</a></li> <li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer">RealmScorer</a></li> <li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder">RealmKnowledgeAugEncoder</a></li> <li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmRetriever">RealmRetriever</a></li> <li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader">RealmReader</a></li> <li><a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA">RealmForOpenQA</a></li></ol> <p>It is used to instantiate an REALM model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the REALM [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import RealmConfig, RealmEmbedder

>>> # Initializing a REALM realm-cc-news-pretrained-* style configuration
>>> configuration = RealmConfig()

>>> # Initializing a model (with random weights) from the google/realm-cc-news-pretrained-embedder style configuration
>>> model = RealmEmbedder(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
<span class="hljs-meta">>>> </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.RealmTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RealmTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmTokenizer</span></span></h3> <a id="transformers.RealmTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L95" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_basic_tokenize<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">never_split<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '[UNK]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '[SEP]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '[PAD]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '[CLS]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '[MASK]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenize_chinese_chars<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">strip_accents<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
- **vocab_file** (`str`) — File containing the vocabulary.
- **do_lower_case** (`bool`, *optional*, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **do_basic_tokenize** (`bool`, *optional*, defaults to `True`) — Whether or not to do basic tokenization before WordPiece.
- **never_split** (`Iterable`, *optional*) — Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`.
- **unk_token** (`str`, *optional*, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths.
- **cls_token** (`str`, *optional*, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **mask_token** (`str`, *optional*, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **tokenize_chinese_chars** (`bool`, *optional*, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)).
- **strip_accents** (`bool`, *optional*) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).

Construct a REALM tokenizer.

[RealmTokenizer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
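As a short, illustrative sketch (not part of the original docstring), the tokenizer can be loaded from one of the pretrained REALM checkpoints referenced on this page and used like any other BERT-style tokenizer; the checkpoint name below is only an example:

```
>>> from transformers import RealmTokenizer

>>> # The pretrained REALM checkpoints use an uncased, BERT-style vocabulary
>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> encoded = tokenizer("Who wrote the play Hamlet?")
>>> tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"])  # e.g. ['[CLS]', 'who', 'wrote', ...]
```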
#### build_inputs_with_special_tokens

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L301)

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A REALM sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
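A brief sketch of both cases (the checkpoint name and query strings are arbitrary examples, not from the original docstring):

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("what does realm stand for"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("retrieval augmented language model"))

>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)         # [CLS] A [SEP]
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)    # [CLS] A [SEP] B [SEP]
```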
#### get_special_tokens_mask

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L326)

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
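For illustration (the checkpoint name and input are arbitrary examples), the mask can also be inspected for an already-encoded sequence:

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> ids = tokenizer("hello world")["input_ids"]
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
>>> # e.g. [1, 0, 0, 1]: the [CLS] and [SEP] positions are marked with 1, regular tokens with 0
```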
#### create_token_type_ids_from_sequences

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L354)

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A REALM sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
```
| first sequence | second sequence |</pre></div></div> <p>If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4> <a id="transformers.RealmTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L383" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span></span> 
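For illustration, here is a minimal sketch of how the returned mask lines up with a sequence pair, assuming the `google/realm-cc-news-pretrained-encoder` checkpoint used elsewhere on this page:

```python
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> # token ids without special tokens for the two sequences
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world!"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Nice to meet you!"))
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
>>> # 0s cover [CLS] + first sequence + [SEP]; 1s cover second sequence + [SEP]
>>> len(token_type_ids) == len(ids_a) + len(ids_b) + 3
True
```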
#### save_vocabulary

[](#transformers.RealmTokenizer.save_vocabulary)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L383)

( save_directory: str, filename_prefix: typing.Optional[str] = None )

#### batch_encode_candidates

[](#transformers.RealmTokenizer.batch_encode_candidates)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm.py#L228)

( text, **kwargs ) → [BatchEncoding](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding)

Parameters

- **text** (`List[List[str]]`) — The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, num_candidates, text).
- **text_pair** (`List[List[str]]`, *optional*) — The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, num_candidates, text).
- **\*\*kwargs** — Keyword arguments of the `__call__` method.

Returns

[BatchEncoding](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding)

Encoded text or text pair.

Encode a batch of text or text pair. This method is similar to the regular `__call__` method but has the following differences:
1. Handle the additional num_candidates axis: (batch_size, num_candidates, text).
2. Always pad the sequences to *max_length*.
3. Must specify *max_length* in order to stack packs of candidates into a batch.

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`

Example:

```python
>>> from transformers import RealmTokenizer

>>> # batch_size = 2, num_candidates = 2
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]
>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
```
## [](#transformers.RealmTokenizerFast)RealmTokenizerFast

### class transformers.RealmTokenizerFast

[](#transformers.RealmTokenizerFast)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/tokenization_realm_fast.py#L102)

( vocab_file = None, tokenizer_file = None, do_lower_case = True, unk_token = '[UNK]', sep_token = '[SEP]', pad_token = '[PAD]', cls_token = '[CLS]', mask_token = '[MASK]', tokenize_chinese_chars = True, strip_accents = None, **kwargs )

Parameters

- **vocab_file** (`str`) — File containing the vocabulary.
- **do_lower_case** (`bool`, *optional*, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **unk_token** (`str`, *optional*, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths.
- **cls_token** (`str`, *optional*, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **mask_token** (`str`, *optional*, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **clean_text** (`bool`, *optional*, defaults to `True`) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one.
- **tokenize_chinese_chars** (`bool`, *optional*, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)).
- **strip_accents** (`bool`, *optional*) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).
- **wordpieces_prefix** (`str`, *optional*, defaults to `"##"`) — The prefix for subwords.

Construct a “fast” REALM tokenizer (backed by HuggingFace’s *tokenizers* library). Based on WordPiece.

[RealmTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizerFast) is identical to [BertTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizerFast) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
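Apart from `batch_encode_candidates` below, the fast tokenizer is used like any other tokenizer in the library. A minimal sketch, assuming the same `google/realm-cc-news-pretrained-encoder` checkpoint as in the other examples:

```python
>>> from transformers import RealmTokenizerFast

>>> tokenizer = RealmTokenizerFast.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> # standard sequence-pair encoding: [CLS] A [SEP] B [SEP]
>>> encoded = tokenizer("Hello world!", "Nice to meet you!", return_tensors="pt")
>>> encoded["input_ids"].shape  # (1, sequence_length)
```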
href="/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text</strong> (<code>List[List[str]]</code>) —
The batch of sequences to be encoded. Each sequence must be in this format: (batch_size,
num_candidates, text).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_pair</strong> (<code>List[List[str]]</code>, <em>optional</em>) —
The batch of sequences to be encoded. Each sequence must be in this format: (batch_size,
num_candidates, text).
**kwargs —
Keyword arguments of the <strong>call</strong> method.</span></span> </li></ul> <div id="transformers.RealmTokenizerFast.batch_encode_candidates.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><a href="/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>Encoded text or text pair.</p>
</p> </div></div> <p>Encode a batch of text or text pair. This method is similar to regular <strong>call</strong> method but has the following
differences:</p> <ol><li>Handle additional num_candidate axis. (batch_size, num_candidates, text)</li> <li>Always pad the sequences to <em>max_length</em>.</li> <li>Must specify <em>max_length</em> in order to stack packs of candidates into a batch.</li></ol> <ul><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul> <div class="relative group rounded-md"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> RealmTokenizerFast
<span class="hljs-meta">>>> </span><span class="hljs-comment"># batch_size = 2, num_candidates = 2</span>
<span class="hljs-meta">>>> </span>text = [[<span class="hljs-string">"Hello world!"</span>, <span class="hljs-string">"Nice to meet you!"</span>], [<span class="hljs-string">"The cute cat."</span>, <span class="hljs-string">"The adorable dog."</span>]]
<span class="hljs-meta">>>> </span>tokenizer = RealmTokenizerFast.from_pretrained(<span class="hljs-string">"google/realm-cc-news-pretrained-encoder"</span>)
<span class="hljs-meta">>>> </span>tokenized_text = tokenizer.batch_encode_candidates(text, max_length=<span class="hljs-number">10</span>, return_tensors=<span class="hljs-string">"pt"</span>)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.RealmRetriever" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RealmRetriever</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmRetriever"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmRetriever</span></span></h3> <a id="transformers.RealmRetriever" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmRetriever"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 
0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/retrieval_realm.py#L72" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">block_records<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmRetriever.block_records" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever.block_records"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>block_records</strong> (<code>np.ndarray</code>) —
A numpy array which cantains evidence texts.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmRetriever.tokenizer" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever.tokenizer"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tokenizer</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmTokenizer">RealmTokenizer</a>) —
The tokenizer to encode retrieved texts.</span></span> </li></ul> </div></div> <p>The retriever of REALM outputting the retrieved evidence block and whether the block has answers as well as answer
positions.”</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmRetriever.block_has_answer"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>block_has_answer</span></h4> <a id="transformers.RealmRetriever.block_has_answer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmRetriever.block_has_answer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/retrieval_realm.py#L129" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">concat_inputs<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">answer_ids<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p>check if retrieved_blocks has answers.</p></div></div> <h2 class="relative group"><a id="transformers.RealmEmbedder" 
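As a concrete illustration of how the retriever is typically wired up, the sketch below loads a retriever and hands it to [RealmForOpenQA](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA); it assumes the `google/realm-orqa-nq-openqa` checkpoint, which bundles the evidence block records and the matching tokenizer.

```python
>>> from transformers import AutoTokenizer, RealmForOpenQA, RealmRetriever

>>> # Load the evidence block records and the matching tokenizer from the checkpoint
>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-orqa-nq-openqa")

>>> # The open-QA model queries the retriever at inference time
>>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)
```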
class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RealmEmbedder</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmEmbedder"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmEmbedder</span></span></h3> <a id="transformers.RealmEmbedder" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmEmbedder"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1149" target="_blank"><span><</span> <span class="hidden 
md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The embedder of REALM outputting projected score that will be used to calculate relevance score.
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmEmbedder.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.RealmEmbedder.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmEmbedder.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1165" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span 
class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.realm.modeling_realm.RealmEmbedderOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 9 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token.</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p>
<p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.RealmEmbedder.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
Returns

`transformers.models.realm.modeling_realm.RealmEmbedderOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) and inputs.

- **projected_score** (`torch.FloatTensor` of shape `(batch_size, config.retriever_proj_size)`) — Projected score.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [RealmEmbedder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, RealmEmbedder
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> projected_score = outputs.projected_score
```
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1231" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">query_embedder<span class="opacity-60"> = None</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.query_embedder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.query_embedder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>query_embedder</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmEmbedder">RealmEmbedder</a>) —
Embedder for input sequences. If not specified, it will use the same embedder as candidate sequences.</span></span> </li></ul> </div></div> <p>The scorer of REALM outputting relevance scores representing the score of document candidates (before softmax).
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmScorer.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.RealmScorer.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmScorer.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1247" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span 
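To make the `query_embedder` argument concrete, here is a minimal construction sketch built from a bare [RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig); no pretrained weights are loaded this way, and in practice you would start from pretrained checkpoints instead.

```python
>>> from transformers import RealmConfig, RealmEmbedder, RealmScorer

>>> config = RealmConfig()
>>> # Dedicated embedder for the query side; candidate sequences are embedded
>>> # by the scorer's own internal embedder.
>>> query_embedder = RealmEmbedder(config)
>>> scorer = RealmScorer(config, query_embedder=query_embedder)
```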
class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">candidate_input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">candidate_attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">candidate_token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">candidate_inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.realm.modeling_realm.RealmScorerOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
with-hover:right-full" href="#transformers.RealmScorer.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token.</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p>
<p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.candidate_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.candidate_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>candidate_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>) —
Indices of candidate input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.candidate_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.candidate_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>candidate_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.candidate_token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.candidate_token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>candidate_token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token.</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmScorer.forward.candidate_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmScorer.forward.candidate_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>candidate_inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size * num_candidates, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>candidate_input_ids</code> you can choose to directly pass an embedded
representation. This is useful if you want more control over how to convert <em>candidate_input_ids</em> indices
into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li></ul> <div id="transformers.RealmScorer.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>transformers.models.realm.modeling_realm.RealmScorerOutput</code> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <code>transformers.models.realm.modeling_realm.RealmScorerOutput</code> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) and inputs.</p>
<ul>
<li><strong>relevance_score</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_candidates)</code>) — The relevance score of document candidates (before softmax).</li>
<li><strong>query_score</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.retriever_proj_size)</code>) — Query score derived from the query embedder.</li>
<li><strong>candidate_score</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_candidates, config.retriever_proj_size)</code>) — Candidate score derived from the embedder.</li>
</ul>
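
The three outputs are related by a simple inner product. As a minimal sketch of the scoring rule described in the REALM paper (illustrative tensors only, not the module's internal code), the relevance logits can be reproduced from the projected query and candidate embeddings:

```python
import torch

# Illustrative shapes only: batch_size=2, num_candidates=8, retriever_proj_size=128.
query_score = torch.randn(2, 128)
candidate_score = torch.randn(2, 8, 128)

# One relevance logit per candidate: the inner product between the projected
# query embedding and each projected candidate embedding (before softmax).
relevance_score = torch.einsum("bd,bnd->bn", query_score, candidate_score)  # shape (2, 8)
```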
The [RealmScorer](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmScorer) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> import torch
>>> from transformers import AutoTokenizer, RealmScorer

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer")
>>> model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

>>> # batch_size = 2, num_candidates = 2
>>> input_texts = ["How are you?", "What is the item in the picture?"]
>>> candidates_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]]
>>> inputs = tokenizer(input_texts, return_tensors="pt")
>>> candidates_inputs = tokenizer.batch_encode_candidates(candidates_texts, max_length=10, return_tensors="pt")

>>> outputs = model(
...     **inputs,
...     candidate_input_ids=candidates_inputs.input_ids,
...     candidate_attention_mask=candidates_inputs.attention_mask,
...     candidate_token_type_ids=candidates_inputs.token_type_ids,
... )
>>> relevance_score = outputs.relevance_score
```
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1379" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The knowledge-augmented encoder of REALM outputting masked language model logits and marginal log-likelihood loss.
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmKnowledgeAugEncoder.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.RealmKnowledgeAugEncoder.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmKnowledgeAugEncoder.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1400" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
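
To illustrate the note about weights (a brief sketch; the checkpoint name follows the REALM checkpoints referenced elsewhere on this page), instantiating from a config gives a randomly initialized encoder, while `from_pretrained()` loads trained weights:

```python
from transformers import RealmConfig, RealmKnowledgeAugEncoder

# Building from a config only defines the architecture; weights are randomly initialized.
config = RealmConfig(num_candidates=2)
model = RealmKnowledgeAugEncoder(config)

# Loading a checkpoint restores the trained weights.
model = RealmKnowledgeAugEncoder.from_pretrained("google/realm-cc-news-pretrained-encoder", num_candidates=2)
```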
dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">relevance_score<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mlm_mask<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token.</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_candidates, sequence_length)</code>, <em>optional</em>) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p>
<p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_candidates, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.relevance_score" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.relevance_score"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>relevance_score</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_candidates)</code>, <em>optional</em>) —
Relevance score derived from RealmScorer, must be specified if you want to compute the masked language
modeling loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Labels for computing the masked language modeling loss. Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the
loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmKnowledgeAugEncoder.forward.mlm_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmKnowledgeAugEncoder.forward.mlm_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mlm_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid calculating joint loss on certain positions. If not specified, the loss will not be masked.
Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul></span></span> </li></ul> <div id="transformers.RealmKnowledgeAugEncoder.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
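
The `-100` convention in `labels` is the standard masked-language-modeling ignore index. A small illustrative sketch (generic MLM label preparation with made-up token ids, not REALM-specific code) of how such labels are usually built:

```python
import torch

# Original token ids and the positions chosen for masking (toy values).
original_ids = torch.tensor([[101, 7592, 2088, 2003, 102]])
mask_positions = torch.tensor([[False, False, True, False, False]])

# Inputs carry the [MASK] id (103 in BERT-style vocabularies) at masked positions;
# labels keep the original id there and -100 everywhere else, so the loss is
# only computed on the masked tokens.
input_ids = original_ids.masked_fill(mask_positions, 103)
labels = original_ids.masked_fill(~mask_positions, -100)
```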
<p><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) and inputs.</p>
<ul>
<li>
<p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Masked language modeling (MLM) loss.</p>
</li>
<li>
<p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p>
</li>
<li>
<p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p>
</li>
<li>
<p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.</p>
</li>
</ul>
The [RealmKnowledgeAugEncoder](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, RealmKnowledgeAugEncoder
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"google/realm-cc-news-pretrained-encoder"</span>)
<span class="hljs-meta">>>> </span>model = RealmKnowledgeAugEncoder.from_pretrained(
<span class="hljs-meta">... </span> <span class="hljs-string">"google/realm-cc-news-pretrained-encoder"</span>, num_candidates=<span class="hljs-number">2</span>
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># batch_size = 2, num_candidates = 2</span>
<span class="hljs-meta">>>> </span>text = [[<span class="hljs-string">"Hello world!"</span>, <span class="hljs-string">"Nice to meet you!"</span>], [<span class="hljs-string">"The cute cat."</span>, <span class="hljs-string">"The adorable dog."</span>]]
<span class="hljs-meta">>>> </span>inputs = tokenizer.batch_encode_candidates(text, max_length=<span class="hljs-number">10</span>, return_tensors=<span class="hljs-string">"pt"</span>)
<span class="hljs-meta">>>> </span>outputs = model(**inputs)
<span class="hljs-meta">>>> </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.RealmReader" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RealmReader</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmReader"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmReader</span></span></h3> <a id="transformers.RealmReader" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmReader"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1529" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The reader of REALM.
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmReader.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.RealmReader.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmReader.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1542" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span 
class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">relevance_score<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">block_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">has_answers<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.realm.modeling_realm.RealmReaderOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.RealmReader.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token.</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p>
<p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(reader_beam_size, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.relevance_score" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.relevance_score"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>relevance_score</strong> (<code>torch.FloatTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) —
Relevance score, which must be specified if you want to compute the logits and marginal log loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.block_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.block_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>block_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(searcher_beam_size, sequence_length)</code>, <em>optional</em>) —
The mask of the evidence block, which must be specified if you want to compute the logits and marginal log
loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence
are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence
are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.has_answers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.has_answers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>has_answers</strong> (<code>torch.BoolTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) —
Whether or not the evidence block has answer(s).</span></span> </li></ul> <div id="transformers.RealmReader.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>transformers.models.realm.modeling_realm.RealmReaderOutput</code> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
A `transformers.models.realm.modeling_realm.RealmReaderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Total loss.
- **retriever_loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Retriever loss.
- **reader_loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Reader loss.
- **retriever_correct** (`torch.BoolTensor` of shape `(config.searcher_beam_size,)`, *optional*) — Whether or not an evidence block contains the answer.
- **reader_correct** (`torch.BoolTensor` of shape `(config.reader_beam_size, num_candidates)`, *optional*) — Whether or not a span candidate contains the answer.
- **block_idx** (`torch.LongTensor` of shape `()`) — The index of the retrieved evidence block in which the predicted answer is most likely.
- **candidate** (`torch.LongTensor` of shape `()`) — The index of the retrieved span candidate in which the predicted answer is most likely.
- **start_pos** (`torch.IntTensor` of shape `()`) — Predicted answer starting position in *RealmReader*'s inputs.
- **end_pos** (`torch.IntTensor` of shape `()`) — Predicted answer ending position in *RealmReader*'s inputs.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [RealmReader](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmReader) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
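In practice the reader is driven by [RealmForOpenQA](/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA), which concatenates the question with each retrieved evidence block and derives `relevance_score` and `block_mask` from the retrieval step. As an illustration of the expected input shapes only (it is not part of the original documentation), the sketch below runs a deliberately small, randomly initialized reader on dummy tensors; the released reader checkpoint is `google/realm-orqa-nq-reader`.

```
>>> import torch
>>> from transformers import RealmConfig, RealmReader

>>> # Tiny, randomly initialized configuration, used here purely for illustration.
>>> config = RealmConfig(
...     num_hidden_layers=2,
...     num_attention_heads=2,
...     hidden_size=64,
...     intermediate_size=128,
...     reader_beam_size=2,
...     max_span_width=4,
... )
>>> reader = RealmReader(config)

>>> seq_len = 16
>>> # Each row stands for "[CLS] question [SEP] evidence block [SEP]"; here just random ids.
>>> input_ids = torch.randint(0, config.vocab_size, (config.reader_beam_size, seq_len))
>>> attention_mask = torch.ones_like(input_ids)
>>> # 0 = question segment, 1 = evidence-block segment.
>>> token_type_ids = torch.cat(
...     [torch.zeros(config.reader_beam_size, 6, dtype=torch.long),
...      torch.ones(config.reader_beam_size, 10, dtype=torch.long)], dim=-1)
>>> # block_mask marks the evidence-block tokens that may belong to an answer span.
>>> block_mask = token_type_ids.bool()
>>> relevance_score = torch.randn(config.reader_beam_size)

>>> outputs = reader(
...     input_ids,
...     attention_mask=attention_mask,
...     token_type_ids=token_type_ids,
...     relevance_score=relevance_score,
...     block_mask=block_mask,
... )
>>> outputs.block_idx, outputs.start_pos, outputs.end_pos
```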
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1735" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">retriever<span class="opacity-60"> = None</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p><code>RealmForOpenQA</code> for end-to-end open domain question answering.
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmForOpenQA.block_embedding_to"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>block_embedding_to</span></h4> <a id="transformers.RealmForOpenQA.block_embedding_to" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmForOpenQA.block_embedding_to"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1758" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">device<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.RealmForOpenQA.block_embedding_to.device" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.block_embedding_to.device"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>device</strong> (<code>str</code> or <code>torch.device</code>) —
The device to which <code>self.block_emb</code> will be sent.</span></span> </li></ul> </div></div> <p>Send <code>self.block_emb</code> to a specific device.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmForOpenQA.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.RealmForOpenQA.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmForOpenQA.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/realm/modeling_realm.py#L1768" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span 
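This is useful because the pre-computed evidence-block embeddings (`num_block_records` vectors of size `retriever_proj_size`) take several gigabytes with the default configuration, so they often cannot share an accelerator with the rest of the model. A hypothetical usage sketch, with a device layout chosen only for illustration:

```
>>> import torch
>>> from transformers import RealmForOpenQA, RealmRetriever

>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
>>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

>>> if torch.cuda.is_available():
...     model.to("cuda:0")               # moves all parameters and buffers, including block_emb
...     model.block_embedding_to("cpu")  # send the large block-embedding buffer back to the CPU
```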
class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">answer_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(1, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(1, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(1, sequence_length)</code>, <em>optional</em>) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>0 corresponds to a <em>sentence A</em> token,</li>
<li>1 corresponds to a <em>sentence B</em> token (should not be used in this model by design).</li>
</ul>
<p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.answer_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.answer_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>answer_ids</strong> (<code>list</code> of shape <code>(num_answers, answer_length)</code>, <em>optional</em>) —
Answer ids for computing the marginal log-likelihood loss. Indices should be in <code>[-1, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-1</code> are ignored (masked), the
loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.RealmForOpenQA.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) and inputs.</p>
<ul>
<li><strong>reader_output</strong> (<code>dict</code>) — Reader output.</li>
<li><strong>predicted_answer_ids</strong> (<code>torch.LongTensor</code> of shape <code>(answer_sequence_length)</code>) — Predicted answer ids.</li>
</ul>
</p> </div></div> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/realm#transformers.RealmForOpenQA">RealmForOpenQA</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code>
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.RealmForOpenQA.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> RealmForOpenQA, RealmRetriever, AutoTokenizer
<span class="hljs-meta">>>> </span>retriever = RealmRetriever.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>)
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>)
<span class="hljs-meta">>>> </span>model = RealmForOpenQA.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>, retriever=retriever)
<span class="hljs-meta">>>> </span>question = <span class="hljs-string">"Who is the pioneer in modern computer science?"</span>
<span class="hljs-meta">>>> </span>question_ids = tokenizer([question], return_tensors=<span class="hljs-string">"pt"</span>)
<span class="hljs-meta">>>> </span>answer_ids = tokenizer(
<span class="hljs-meta">... </span> [<span class="hljs-string">"alan mathison turing"</span>],
<span class="hljs-meta">... </span> add_special_tokens=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> return_token_type_ids=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> return_attention_mask=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span>).input_ids
<span class="hljs-meta">>>> </span>reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=<span class="hljs-literal">False</span>)
<span class="hljs-meta">>>> </span>predicted_answer = tokenizer.decode(predicted_answer_ids)
<span class="hljs-meta">>>> </span>loss = reader_output.loss</pre></div></div></div></div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/rag" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>RAG</a>
<a href="/docs/transformers/model_doc/reformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Reformer<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"REALM","isExpanded":true,"id":"realm","url":"#realm","sections":[{"title":"Overview","isExpanded":true,"id":"overview","url":"#overview"},{"title":"RealmConfig","isExpanded":true,"id":"transformers.RealmConfig","url":"#transformers.RealmConfig"},{"title":"RealmTokenizer","isExpanded":true,"id":"transformers.RealmTokenizer","url":"#transformers.RealmTokenizer"},{"title":"RealmTokenizerFast","isExpanded":true,"id":"transformers.RealmTokenizerFast","url":"#transformers.RealmTokenizerFast"},{"title":"RealmRetriever","isExpanded":true,"id":"transformers.RealmRetriever","url":"#transformers.RealmRetriever"},{"title":"RealmEmbedder","isExpanded":true,"id":"transformers.RealmEmbedder","url":"#transformers.RealmEmbedder"},{"title":"RealmScorer","isExpanded":true,"id":"transformers.RealmScorer","url":"#transformers.RealmScorer"},{"title":"RealmKnowledgeAugEncoder","isExpanded":true,"id":"transformers.RealmKnowledgeAugEncoder","url":"#transformers.RealmKnowledgeAugEncoder"},{"title":"RealmReader","isExpanded":true,"id":"transformers.RealmReader","url":"#transformers.RealmReader"},{"title":"RealmForOpenQA","isExpanded":true,"id":"transformers.RealmForOpenQA","url":"#transformers.RealmForOpenQA"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#realm" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-realm">REALM</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.RealmConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmConfig"><wbr>Realm<wbr>Config</a> <a href="#transformers.RealmTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmTokenizer"><wbr>Realm<wbr>Tokenizer</a> <a href="#transformers.RealmTokenizerFast" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmTokenizerFast"><wbr>Realm<wbr>Tokenizer<wbr>Fast</a> <a href="#transformers.RealmRetriever" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmRetriever"><wbr>Realm<wbr>Retriever</a> <a href="#transformers.RealmEmbedder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmEmbedder"><wbr>Realm<wbr>Embedder</a> <a href="#transformers.RealmScorer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmScorer"><wbr>Realm<wbr>Scorer</a> <a href="#transformers.RealmKnowledgeAugEncoder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmKnowledgeAugEncoder"><wbr>Realm<wbr>Knowledge<wbr>Aug<wbr>Encoder</a> <a href="#transformers.RealmReader" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmReader"><wbr>Realm<wbr>Reader</a> <a href="#transformers.RealmForOpenQA" class="pl-4 text-gray-400 
transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RealmForOpenQA"><wbr>Realm<wbr>For<wbr>OpenQA</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/realm" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/realm");
}
</script>
<iframe name="__privateStripeMetricsController0910" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Frealm&title=REALM&referrer=&muid=NA&sid=NA&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:54:59.250Z |
RetriBERT | https://huggingface.co/docs/transformers/model_doc/retribert |
## [](#overview)Overview
The RetriBERT model was proposed in the blog post [Explain Anything Like I’m Five: A Model for Open Domain Long Form Question Answering](https://yjernite.github.io/lfqa.html). RetriBERT is a small model that uses either a single or pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.
This model was contributed by [yjernite](https://huggingface.co/yjernite). Code to train and use the model can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research-projects/distillation).
## [](#transformers.RetriBertConfig)RetriBertConfig
### class transformers.RetriBertConfig
[](#transformers.RetriBertConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/configuration_retribert.py#L31)
( vocab\_size = 30522 hidden\_size = 768 num\_hidden\_layers = 8 num\_attention\_heads = 12 intermediate\_size = 3072 hidden\_act = 'gelu' hidden\_dropout\_prob = 0.1 attention\_probs\_dropout\_prob = 0.1 max\_position\_embeddings = 512 type\_vocab\_size = 2 initializer\_range = 0.02 layer\_norm\_eps = 1e-12 share\_encoders = True projection\_dim = 128 pad\_token\_id = 0 \*\*kwargs )
This is the configuration class to store the configuration of a [RetriBertModel](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertModel). It is used to instantiate a RetriBertModel model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RetriBERT [yjernite/retribert-base-uncased](https://huggingface.co/yjernite/retribert-base-uncased) architecture.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
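A brief sketch of the standard configuration workflow with the classes documented on this page (nothing below is specific to RetriBERT beyond the class names):

```
>>> from transformers import RetriBertConfig, RetriBertModel

>>> # Initializing a configuration with the default values
>>> # (similar to the yjernite/retribert-base-uncased architecture)
>>> configuration = RetriBertConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = RetriBertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```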
## [](#transformers.RetriBertTokenizer)RetriBertTokenizer
### class transformers.RetriBertTokenizer
[](#transformers.RetriBertTokenizer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L70)
( vocab\_file do\_lower\_case = True do\_basic\_tokenize = True never\_split = None unk\_token = '\[UNK\]' sep\_token = '\[SEP\]' pad\_token = '\[PAD\]' cls\_token = '\[CLS\]' mask\_token = '\[MASK\]' tokenize\_chinese\_chars = True strip\_accents = None \*\*kwargs )
Constructs a RetriBERT tokenizer.
[RetriBertTokenizer](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.
This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
#### build\_inputs\_with\_special\_tokens
[](#transformers.RetriBertTokenizer.build_inputs_with_special_tokens)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L211)
( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.RetriBertTokenizer.build_inputs_with_special_tokens.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- [](#transformers.RetriBertTokenizer.build_inputs_with_special_tokens.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
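A small usage sketch; the checkpoint name is the one referenced in the configuration section above (assuming it ships the tokenizer files) and the sentences are placeholders:

```
>>> from transformers import RetriBertTokenizer

>>> tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("fine thanks"))

>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)  # [CLS] A [SEP]
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] B [SEP]
```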
Converts a sequence of tokens (string) into a single string.
#### create\_token\_type\_ids\_from\_sequences
[](#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L266)
( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```
If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
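Continuing the sketch from the previous method, with the same assumed `tokenizer`, `ids_a` and `ids_b`:

```
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
>>> # zeros over "[CLS] A [SEP]" and ones over "B [SEP]", following the pattern shown above
```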
#### get\_special\_tokens\_mask
[](#transformers.RetriBertTokenizer.get_special_tokens_mask)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L237)
( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None already\_has\_special\_tokens: bool = False ) → `List[int]`
Parameters
- [](#transformers.RetriBertTokenizer.get_special_tokens_mask.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.RetriBertTokenizer.get_special_tokens_mask.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- [](#transformers.RetriBertTokenizer.get_special_tokens_mask.already_has_special_tokens)**already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
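Under the same assumptions, a sketch of retrieving the special-tokens mask for a pair of id lists that do not yet contain special tokens:

```
>>> special_tokens_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
>>> # 1 marks positions that will hold special tokens ([CLS]/[SEP]), 0 marks ordinary sequence tokens
```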
## [](#transformers.RetriBertTokenizerFast)RetriBertTokenizerFast
### class transformers.RetriBertTokenizerFast
[](#transformers.RetriBertTokenizerFast)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L54)
( vocab\_file = None tokenizer\_file = None do\_lower\_case = True unk\_token = '\[UNK\]' sep\_token = '\[SEP\]' pad\_token = '\[PAD\]' cls\_token = '\[CLS\]' mask\_token = '\[MASK\]' tokenize\_chinese\_chars = True strip\_accents = None \*\*kwargs )
Construct a “fast” RetriBERT tokenizer (backed by HuggingFace’s _tokenizers_ library).
[RetriBertTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertTokenizerFast) is identical to [BertTokenizerFast](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizerFast) and runs end-to-end tokenization: punctuation splitting and wordpiece.
This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
#### build\_inputs\_with\_special\_tokens
[](#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L148)
( token\_ids\_0 token\_ids\_1 = None ) → `List[int]`
Parameters
- [](#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- [](#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:
- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
#### create\_token\_type\_ids\_from\_sequences
[](#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L173)
( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```
If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
## [](#transformers.RetriBertModel)RetriBertModel
### class transformers.RetriBertModel
[](#transformers.RetriBertModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/modeling_retribert.py#L88)
( config: RetriBertConfig )
Parameters
- [](#transformers.RetriBertModel.config)**config** ([RetriBertConfig](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
BERT-based model to embed queries or documents for document retrieval.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
[](#transformers.RetriBertModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/modeling_retribert.py#L176)
( input\_ids\_query: LongTensor attention\_mask\_query: typing.Optional\[torch.FloatTensor\] input\_ids\_doc: LongTensor attention\_mask\_doc: typing.Optional\[torch.FloatTensor\] checkpoint\_batch\_size: int = -1 ) → `torch.FloatTensor`
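A hedged sketch of a call pairing each query with its corresponding document; the checkpoint is the one from the configuration section above, the texts are placeholders, and the only thing assumed about the return value is the `torch.FloatTensor` type shown in the signature:

```
>>> from transformers import RetriBertModel, RetriBertTokenizer

>>> tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
>>> model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

>>> queries = tokenizer(["what causes rainbows?"], padding=True, return_tensors="pt")
>>> docs = tokenizer(["Rainbows are caused by the refraction of light in water droplets."], padding=True, return_tensors="pt")

>>> output = model(
...     input_ids_query=queries.input_ids,
...     attention_mask_query=queries.attention_mask,
...     input_ids_doc=docs.input_ids,
...     attention_mask_doc=docs.attention_mask,
... )  # a torch.FloatTensor
```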
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="RetriBERT">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/model_doc/retribert">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>RetriBERT</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"retribert","sections":[{"local":"overview","title":"Overview"},{"local":"transformers.RetriBertConfig","title":"RetriBertConfig"},{"local":"transformers.RetriBertTokenizer","title":"RetriBertTokenizer"},{"local":"transformers.RetriBertTokenizerFast","title":"RetriBertTokenizerFast"},{"local":"transformers.RetriBertModel","title":"RetriBertModel"}],"title":"RetriBERT"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":true,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","isExpanded":true,"id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/retribert","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"RetriBERT"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 
2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">RetriBERT</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 
group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="retribert" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#retribert"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RetriBERT</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>The RetriBERT model was proposed in the blog post <a href="https://yjernite.github.io/lfqa.html" rel="nofollow">Explain Anything Like I’m Five: A Model for Open Domain Long Form
Question Answering</a>. RetriBERT is a small model that uses either a single or
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.</p> <p>This model was contributed by <a href="https://huggingface.co/yjernite" rel="nofollow">yjernite</a>. Code to train and use the model can be
found <a href="https://github.com/huggingface/transformers/tree/main/examples/research-projects/distillation" rel="nofollow">here</a>.</p> <h2 class="relative group"><a id="transformers.RetriBertConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RetriBertConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RetriBertConfig</span></span></h3> <a id="transformers.RetriBertConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
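In practice, the query and document encoders project text into the same low-dimensional space, so a dot product between the two embeddings can serve as a retrieval score. The snippet below is a minimal sketch of that idea, assuming the `embed_questions`/`embed_answers` helpers of `RetriBertModel` and the `yjernite/retribert-base-uncased` checkpoint referenced later on this page; it is not an official example.

```python
# Minimal sketch of dense indexing with RetriBERT (hedged, not an official snippet).
# Assumes the `embed_questions`/`embed_answers` helpers of `RetriBertModel`.
import torch

from transformers import RetriBertModel, RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")
model.eval()

query = "Why is the sky blue?"
passage = "The blue color of the sky is caused by Rayleigh scattering of sunlight."

query_inputs = tokenizer(query, return_tensors="pt")
passage_inputs = tokenizer(passage, return_tensors="pt")

with torch.no_grad():
    # Both embeddings live in the same projection space (projection_dim = 128 by default),
    # so their dot product can be used as a retrieval score.
    query_emb = model.embed_questions(query_inputs.input_ids, query_inputs.attention_mask)
    passage_emb = model.embed_answers(passage_inputs.input_ids, passage_inputs.attention_mask)

score = (query_emb * passage_emb).sum(-1)
print(query_emb.shape, passage_emb.shape, score)
```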
## [](#transformers.RetriBertConfig)RetriBertConfig

### class transformers.RetriBertConfig

[](#transformers.RetriBertConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/configuration_retribert.py#L31)

( vocab\_size = 30522hidden\_size = 768num\_hidden\_layers = 8num\_attention\_heads = 12intermediate\_size = 3072hidden\_act = 'gelu'hidden\_dropout\_prob = 0.1attention\_probs\_dropout\_prob = 0.1max\_position\_embeddings = 512type\_vocab\_size = 2initializer\_range = 0.02layer\_norm\_eps = 1e-12share\_encoders = Trueprojection\_dim = 128pad\_token\_id = 0\*\*kwargs )

Parameters

- **vocab_size** (`int`, *optional*, defaults to 30522) — Vocabulary size of the RetriBERT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [RetriBertModel](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 8) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **hidden_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_probs_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **type_vocab_size** (`int`, *optional*, defaults to 2) — The vocabulary size of the *token_type_ids* passed into [BertModel](/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertModel).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **share_encoders** (`bool`, *optional*, defaults to `True`) — Whether or not to use the same Bert-type encoder for the queries and the documents.
- **projection_dim** (`int`, *optional*, defaults to 128) — Final dimension of the query and document representations after projection.
RetriBertModel model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the RetriBERT
<a href="https://huggingface.co/yjernite/retribert-base-uncased" rel="nofollow">yjernite/retribert-base-uncased</a> architecture.</p> <p>Configuration objects inherit from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the
documentation from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p></div> <h2 class="relative group"><a id="transformers.RetriBertTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>RetriBertTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RetriBertTokenizer</span></span></h3> <a id="transformers.RetriBertTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
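Like other configuration classes, it can be instantiated directly and handed to the model. A minimal sketch, assuming `RetriBertConfig` and `RetriBertModel` are still exported at the top level of this `transformers` version:

```python
from transformers import RetriBertConfig, RetriBertModel

# Initializing a RetriBERT configuration with the default values
configuration = RetriBertConfig()

# Initializing a model (with random weights) from that configuration
model = RetriBertModel(configuration)

# Accessing the model configuration
configuration = model.config
```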
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L70" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_basic_tokenize<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">never_split<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '[UNK]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '[SEP]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '[PAD]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '[CLS]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '[MASK]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenize_chinese_chars<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">strip_accents<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.RetriBertTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) —
File containing the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to lowercase the input when tokenizing.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.do_basic_tokenize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.do_basic_tokenize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_basic_tokenize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to do basic tokenization before WordPiece.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.never_split" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.never_split"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>never_split</strong> (<code>Iterable</code>, <em>optional</em>) —
Collection of tokens which will never be split during tokenization. Only has an effect when
<code>do_basic_tokenize=True</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[UNK]"</code>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[SEP]"</code>) —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[PAD]"</code>) —
The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[CLS]"</code>) —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[MASK]"</code>) —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.tokenize_chinese_chars" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.tokenize_chinese_chars"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tokenize_chinese_chars</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
<a href="https://github.com/huggingface/transformers/issues/328" rel="nofollow">issue</a>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.strip_accents" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.strip_accents"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>strip_accents</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for <code>lowercase</code> (as in the original BERT).</span></span> </li></ul> </div></div> <p>Constructs a RetriBERT tokenizer.</p> <p><a href="/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertTokenizer">RetriBertTokenizer</a> is identical to <a href="/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizer">BertTokenizer</a> and runs end-to-end tokenization: punctuation splitting
and wordpiece.</p> <p>This tokenizer inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains most of the main methods. Users should refer
to: this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizer.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.RetriBertTokenizer.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizer.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L211" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> 
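A short usage sketch, assuming the [yjernite/retribert-base-uncased](https://huggingface.co/yjernite/retribert-base-uncased) checkpoint referenced above is available:

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

# BERT-style WordPiece tokenization, with [CLS]/[SEP] special tokens added automatically
encoded = tokenizer("How do retrieval-augmented models store knowledge?")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```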
#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L211)

( token\_ids\_0: typing.List[int]token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

*   **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
*   **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]`

List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

*   single sequence: `[CLS] X [SEP]`
*   pair of sequences: `[CLS] A [SEP] B [SEP]`
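For illustration, a minimal sketch of both formats (the example sentences are arbitrary, and the checkpoint is the one referenced above):

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

query_ids = tokenizer.encode("what is a cat", add_special_tokens=False)
doc_ids = tokenizer.encode("a small domesticated feline", add_special_tokens=False)

# single sequence: [CLS] X [SEP]
single = tokenizer.build_inputs_with_special_tokens(query_ids)

# pair of sequences: [CLS] A [SEP] B [SEP]
pair = tokenizer.build_inputs_with_special_tokens(query_ids, doc_ids)
```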
#### convert\_tokens\_to\_string

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L205)

( tokens )

Converts a sequence of tokens (string) into a single string.
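A quick round-trip sketch, under the same checkpoint assumption as above:

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

tokens = tokenizer.tokenize("retrieval augmented models")
print(tokens)  # WordPiece pieces (sub-word continuations, if any, are prefixed with "##")
print(tokenizer.convert_tokens_to_string(tokens))  # joins the pieces back into "retrieval augmented models"
```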
class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L266" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span 
class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) —
List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) —
Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>List[int]</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p>
</p> </div></div> <p>Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence</p> <div class="relative group rounded-md"><a id="transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizer.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1
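A sketch of the segment IDs produced for a pair of sequences (same assumptions as the sketches above):

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

query_ids = tokenizer.encode("what is a cat", add_special_tokens=False)
doc_ids = tokenizer.encode("a small domesticated feline", add_special_tokens=False)

# 0s cover "[CLS] A [SEP]", 1s cover "B [SEP]"; exact lengths depend on the tokenization
token_type_ids = tokenizer.create_token_type_ids_from_sequences(query_ids, doc_ids)
print(token_type_ids)
```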
#### get\_special\_tokens\_mask

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert.py#L237)

( token\_ids\_0: typing.List[int]token\_ids\_1: typing.Optional[typing.List[int]] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]`

*   **token_ids_0** (`List[int]`) — List of IDs.
*   **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
*   **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns: `List[int]`

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
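A minimal sketch of both calling conventions (same assumptions as above):

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

query_ids = tokenizer.encode("what is a cat", add_special_tokens=False)
doc_ids = tokenizer.encode("a small domesticated feline", add_special_tokens=False)

# 1 marks the special tokens ([CLS]/[SEP]) that would be added around the two sequences
mask = tokenizer.get_special_tokens_mask(query_ids, doc_ids)

# If the IDs already contain special tokens, set already_has_special_tokens=True
ids_with_specials = tokenizer.build_inputs_with_special_tokens(query_ids)
mask_existing = tokenizer.get_special_tokens_mask(ids_with_specials, already_has_special_tokens=True)
```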
## [](#transformers.RetriBertTokenizerFast)RetriBertTokenizerFast

### class transformers.RetriBertTokenizerFast

[](#transformers.RetriBertTokenizerFast)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L54)

( vocab\_file = Nonetokenizer\_file = Nonedo\_lower\_case = Trueunk\_token = '[UNK]'sep\_token = '[SEP]'pad\_token = '[PAD]'cls\_token = '[CLS]'mask\_token = '[MASK]'tokenize\_chinese\_chars = Truestrip\_accents = None\*\*kwargs )

*   **vocab_file** (`str`) —
File containing the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to lowercase the input when tokenizing.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[UNK]"</code>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[SEP]"</code>) —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[PAD]"</code>) —
The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[CLS]"</code>) —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[MASK]"</code>) —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.clean_text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.clean_text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>clean_text</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.tokenize_chinese_chars" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.tokenize_chinese_chars"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tokenize_chinese_chars</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see <a href="https://github.com/huggingface/transformers/issues/328" rel="nofollow">this
issue</a>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.strip_accents" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.strip_accents"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>strip_accents</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for <code>lowercase</code> (as in the original BERT).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.wordpieces_prefix" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.wordpieces_prefix"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>wordpieces_prefix</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"##"</code>) —
The prefix for subwords.</span></span> </li></ul> </div></div> <p>Construct a “fast” RetriBERT tokenizer (backed by HuggingFace’s <em>tokenizers</em> library).</p> <p><a href="/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertTokenizerFast">RetriBertTokenizerFast</a> is identical to <a href="/docs/transformers/v4.30.0/en/model_doc/bert#transformers.BertTokenizerFast">BertTokenizerFast</a> and runs end-to-end tokenization: punctuation
splitting and wordpiece.</p> <p>This tokenizer inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L148" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60"> = None</span></span></span> <span>)</span> <span 
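A minimal usage sketch, assuming the `yjernite/retribert-base-uncased` checkpoint (any checkpoint that ships RetriBERT tokenizer files would work the same way):

```python
from transformers import RetriBertTokenizerFast

# Assumed checkpoint name, for illustration only.
tokenizer = RetriBertTokenizerFast.from_pretrained("yjernite/retribert-base-uncased")

encoding = tokenizer("How many people live in Paris?")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# -> ['[CLS]', 'how', 'many', 'people', 'live', 'in', 'paris', '?', '[SEP]']
```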
class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) —
List of IDs to which the special tokens will be added.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) —
Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>List[int]</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p>
</p> </div></div> <p>Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:</p> <ul><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L173" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
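For example, using the `tokenizer` instantiated above and arbitrary placeholder ids, the two layouts translate into id lists as follows:

```python
single = tokenizer.build_inputs_with_special_tokens([5, 6, 7])
pair = tokenizer.build_inputs_with_special_tokens([5, 6, 7], [8, 9])

# single == [cls_id, 5, 6, 7, sep_id]                # [CLS] X [SEP]
# pair   == [cls_id, 5, 6, 7, sep_id, 8, 9, sep_id]  # [CLS] A [SEP] B [SEP]
print(single, pair)
```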
#### create\_token\_type\_ids\_from\_sequences

[](#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/tokenization_retribert_fast.py#L173)

( token\_ids\_0: typing.List[int]token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns

`List[int]`

List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
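Continuing the sketch above with the same placeholder ids, the returned segment ids mirror that mask:

```python
ids_a = [5, 6, 7]
ids_b = [8, 9]

print(tokenizer.create_token_type_ids_from_sequences(ids_a))
# [0, 0, 0, 0, 0]             -> [CLS] A [SEP] all belong to the first segment
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# [0, 0, 0, 0, 0, 1, 1, 1]    -> B [SEP] belongs to the second segment
```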
## [](#transformers.RetriBertModel)RetriBertModel

### class transformers.RetriBertModel

[](#transformers.RetriBertModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/modeling_retribert.py#L88)

( config: RetriBertConfig )

Parameters

- **config** ([RetriBertConfig](/docs/transformers/v4.30.0/en/model_doc/retribert#transformers.RetriBertConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Bert-based model to embed queries or documents for document retrieval.

This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward

[](#transformers.RetriBertModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/retribert/modeling_retribert.py#L176)

( input\_ids\_query: LongTensorattention\_mask\_query: typing.Optional[torch.FloatTensor]input\_ids\_doc: LongTensorattention\_mask\_doc: typing.Optional[torch.FloatTensor]checkpoint\_batch\_size: int = -1 ) → `torch.FloatTensor`

Parameters

- **input_ids_query** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary for the queries in a batch.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **attention_mask_query** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **input_ids_doc** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary for the documents in a batch.
- **attention_mask_doc** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on document padding token indices.
- **checkpoint_batch_size** (`int`, _optional_, defaults to `-1`) — If greater than 0, uses gradient checkpointing to only compute sequence representation on `checkpoint_batch_size` examples at a time on the GPU. All query representations are still compared to all document representations in the batch.

Returns

`torch.FloatTensor`

The bidirectional cross-entropy loss obtained while trying to match each query to its corresponding document and each document to its corresponding query in the batch.
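A minimal end-to-end sketch of that forward pass, again assuming the `yjernite/retribert-base-uncased` checkpoint: it scores every query in a small batch against every document and returns the bidirectional loss described above.

```python
import torch
from transformers import RetriBertModel, RetriBertTokenizerFast

tokenizer = RetriBertTokenizerFast.from_pretrained("yjernite/retribert-base-uncased")
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

queries = ["How many people live in Paris?", "Who wrote Hamlet?"]
documents = [
    "Paris has an estimated population of about 2.1 million inhabitants.",
    "Hamlet is a tragedy written by William Shakespeare around 1600.",
]

query_inputs = tokenizer(queries, padding=True, return_tensors="pt")
doc_inputs = tokenizer(documents, padding=True, return_tensors="pt")

# Each query is matched against every document in the batch (and vice versa);
# checkpoint_batch_size > 0 would enable gradient checkpointing over sub-batches.
loss = model(
    input_ids_query=query_inputs["input_ids"],
    attention_mask_query=query_inputs["attention_mask"],
    input_ids_doc=doc_inputs["input_ids"],
    attention_mask_doc=doc_inputs["attention_mask"],
    checkpoint_batch_size=-1,
)
print(loss)
```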
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/retribert" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/retribert");
}
</script>
<iframe name="__privateStripeMetricsController1920" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Fretribert&title=RetriBERT&referrer=&muid=NA&sid=NA&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:54:59.479Z |
FSMT | https://huggingface.co/docs/transformers/model_doc/fsmt | **DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign @stas00.
## [](#overview)Overview
FSMT (FairSeq Machine Translation) models were introduced in [Facebook FAIR’s WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.
The abstract of the paper is the following:
_This paper describes Facebook FAIR’s submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT’18 submission by 4.5 BLEU points._
This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19).
## [](#implementation-notes)Implementation Notes
- FSMT uses source and target vocabulary pairs that aren’t combined into one. It doesn’t share embedding tokens either. Its tokenizer is very similar to [XLMTokenizer](/docs/transformers/v4.30.0/en/model_doc/xlm#transformers.XLMTokenizer) and the main model is derived from [BartModel](/docs/transformers/v4.30.0/en/model_doc/bart#transformers.BartModel). The separate vocabularies are illustrated in the short sketch below.
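A minimal sketch that inspects a pretrained configuration (the `facebook/wmt19-en-ru` checkpoint referenced elsewhere on this page is used here only as an example; the exact sizes depend on the checkpoint):

```
>>> from transformers import FSMTConfig

>>> config = FSMTConfig.from_pretrained("facebook/wmt19-en-ru")
>>> # source and target vocabulary sizes are stored separately and need not be equal
>>> print(config.src_vocab_size, config.tgt_vocab_size)
```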
## [](#transformers.FSMTConfig)FSMTConfig
### class transformers.FSMTConfig
[](#transformers.FSMTConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/configuration_fsmt.py#L41)
( langs = \['en', 'de'\]src\_vocab\_size = 42024tgt\_vocab\_size = 42024activation\_function = 'relu'd\_model = 1024max\_length = 200max\_position\_embeddings = 1024encoder\_ffn\_dim = 4096encoder\_layers = 12encoder\_attention\_heads = 16encoder\_layerdrop = 0.0decoder\_ffn\_dim = 4096decoder\_layers = 12decoder\_attention\_heads = 16decoder\_layerdrop = 0.0attention\_dropout = 0.0dropout = 0.1activation\_dropout = 0.0init\_std = 0.02decoder\_start\_token\_id = 2is\_encoder\_decoder = Truescale\_embedding = Truetie\_word\_embeddings = Falsenum\_beams = 5length\_penalty = 1.0early\_stopping = Falseuse\_cache = Truepad\_token\_id = 1bos\_token\_id = 0eos\_token\_id = 2forced\_eos\_token\_id = 2\*\*common\_kwargs )
This is the configuration class to store the configuration of a [FSMTModel](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTModel). It is used to instantiate a FSMT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FSMT [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) architecture.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:
```
>>> from transformers import FSMTConfig, FSMTModel

>>> # Initializing a FSMT facebook/wmt19-en-ru style configuration
>>> config = FSMTConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = FSMTModel(config)

>>> # Accessing the model configuration
>>> configuration = model.config
```
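The defaults can also be overridden to build a smaller, randomly initialized model; the sizes below are purely illustrative:

```
>>> from transformers import FSMTConfig, FSMTModel

>>> # a deliberately tiny configuration, e.g. for quick tests (hypothetical sizes)
>>> small_config = FSMTConfig(
...     src_vocab_size=1000,
...     tgt_vocab_size=1000,
...     d_model=256,
...     encoder_layers=2,
...     decoder_layers=2,
...     encoder_ffn_dim=512,
...     decoder_ffn_dim=512,
...     encoder_attention_heads=4,
...     decoder_attention_heads=4,
... )
>>> model = FSMTModel(small_config)
```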
#### to\_dict
[](#transformers.FSMTConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/configuration_fsmt.py#L220)
( ) → `Dict[str, any]`
Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary, overriding the default _to\_dict()_ from _PretrainedConfig_.
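For example, the serialized dictionary exposes both vocabulary sizes as separate keys (a brief sketch using the default configuration):

```
>>> from transformers import FSMTConfig

>>> config = FSMTConfig()
>>> config_dict = config.to_dict()
>>> config_dict["src_vocab_size"], config_dict["tgt_vocab_size"]
(42024, 42024)
```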
## [](#transformers.FSMTTokenizer)FSMTTokenizer
### class transformers.FSMTTokenizer
[](#transformers.FSMTTokenizer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L135)
( langs = Nonesrc\_vocab\_file = Nonetgt\_vocab\_file = Nonemerges\_file = Nonedo\_lower\_case = Falseunk\_token = '<unk>'bos\_token = '<s>'sep\_token = '</s>'pad\_token = '<pad>'\*\*kwargs )
Construct a FAIRSEQ Transformer tokenizer, based on Byte-Pair Encoding (BPE). The tokenization process is the following:

- Moses preprocessing and tokenization.
- Normalizing all input texts.
- The arguments `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like “**classify**”) to a vocabulary.
- The argument `langs` defines a pair of languages.
This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
#### build\_inputs\_with\_special\_tokens
[](#transformers.FSMTTokenizer.build_inputs_with_special_tokens)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L404)
( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- [](#transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A FAIRSEQ Transformer sequence has the following format:
- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`
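As a quick sanity check (a sketch, assuming the `facebook/wmt19-en-ru` checkpoint used elsewhere on this page), the last id of any encoded sequence is the `</s>` separator appended by this method:

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
>>> ids = tokenizer("Hello world").input_ids
>>> # build_inputs_with_special_tokens appends the separator at the end of the sequence
>>> ids[-1] == tokenizer.sep_token_id
True
```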
#### get\_special\_tokens\_mask
[](#transformers.FSMTTokenizer.get_special_tokens_mask)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L430)
( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]`
Parameters
- [](#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- [](#transformers.FSMTTokenizer.get_special_tokens_mask.already_has_special_tokens)**already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
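A short sketch of querying the mask on an already-encoded sequence (same example checkpoint as above):

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
>>> ids = tokenizer("Hello world").input_ids
>>> # 1 marks special tokens (here the trailing </s>), 0 marks regular sequence tokens
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
>>> mask[-1]
1
```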
#### create\_token\_type\_ids\_from\_sequences
[](#transformers.FSMTTokenizer.create_token_type_ids_from_sequences)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L458)
( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`
Parameters
- [](#transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_0)**token\_ids\_0** (`List[int]`) — List of IDs.
- [](#transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_1)**token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A FAIRSEQ
Transformer sequence pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
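As a sketch (with made-up token ids, assuming the separator-only scheme described above), the mask covers each sequence plus its trailing separator:

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
>>> # hypothetical token ids, only to show the shape of the mask
>>> tokenizer.create_token_type_ids_from_sequences([5, 6, 7], [8, 9])
[0, 0, 0, 0, 1, 1, 1]
```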
#### save\_vocabulary
[](#transformers.FSMTTokenizer.save_vocabulary)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L491)
( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None )
## [](#transformers.FSMTModel)FSMTModel
### class transformers.FSMTModel
[](#transformers.FSMTModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1036)
( config: FSMTConfig )
Parameters
- [](#transformers.FSMTModel.config)**config** ([FSMTConfig](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The bare FSMT Model outputting raw hidden-states without any specific head on top.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
[](#transformers.FSMTModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1058)
( input\_ids: LongTensorattention\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)`
The [FSMTModel](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTModel) forward method, overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
```
>>> from transformers import AutoTokenizer, FSMTModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/wmt19-ru-en")
>>> model = FSMTModel.from_pretrained("facebook/wmt19-ru-en")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
```
## [](#transformers.FSMTForConditionalGeneration)FSMTForConditionalGeneration
### class transformers.FSMTForConditionalGeneration
[](#transformers.FSMTForConditionalGeneration)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1172)
( config: FSMTConfig )
Parameters
- [](#transformers.FSMTForConditionalGeneration.config)**config** ([FSMTConfig](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The FSMT Model with a language modeling head. Can be used for translation.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
[](#transformers.FSMTForConditionalGeneration.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1192)
( input\_ids: LongTensorattention\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`
The [FSMTForConditionalGeneration](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTForConditionalGeneration) forward method, overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Translation example:
```
>>> from transformers import AutoTokenizer, FSMTForConditionalGeneration
>>> mname = "facebook/wmt19-ru-en"
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)
>>> src_text = "Машинное обучение - это здорово, не так ли?"
>>> input_ids = tokenizer(src_text, return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Machine learning is great, isn't it?"``` | <!DOCTYPE html><html class=""><head>
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/fsmt","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"FSMT"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] 
false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">FSMT</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 
group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/albert">ALBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bart">BART </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/barthez">BARThez </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bartpho">BARTpho </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bert">BERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bert-generation">BertGeneration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bert-japanese">BertJapanese </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bertweet">Bertweet </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/big_bird">BigBird </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bigbird_pegasus">BigBirdPegasus </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/biogpt">BioGpt </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/blenderbot">Blenderbot </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/blenderbot-small">Blenderbot Small </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bloom">BLOOM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bort">BORT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/byt5">ByT5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/camembert">CamemBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/canine">CANINE </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/codegen">CodeGen </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/convbert">ConvBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/cpm">CPM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/cpmant">CPMANT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/ctrl">CTRL </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/deberta">DeBERTa </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/deberta-v2">DeBERTa-v2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/dialogpt">DialoGPT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/distilbert">DistilBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/dpr">DPR </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/electra">ELECTRA </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/encoder-decoder">Encoder Decoder Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/ernie">ERNIE </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/ernie_m">ErnieM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/esm">ESM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/flan-t5">FLAN-T5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/flan-ul2">FLAN-UL2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/flaubert">FlauBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/model_doc/fnet">FNet </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/model_doc/fsmt">FSMT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/funnel">Funnel Transformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/openai-gpt">GPT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt_neo">GPT Neo </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt_neox">GPT NeoX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt_neox_japanese">GPT NeoX Japanese </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gptj">GPT-J </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt2">GPT2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt_bigcode">GPTBigCode </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gptsan-japanese">GPTSAN Japanese </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/gpt-sw3">GPTSw3 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/herbert">HerBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/ibert">I-BERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/jukebox">Jukebox </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/led">LED </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/llama">LLaMA </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/longformer">Longformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/model_doc/longt5">LongT5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/luke">LUKE </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/m2m_100">M2M100 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/marian">MarianMT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/markuplm">MarkupLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mbart">MBart and MBart-50 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mega">MEGA </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/megatron-bert">MegatronBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/megatron_gpt2">MegatronGPT2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mluke">mLUKE </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mobilebert">MobileBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mpnet">MPNet </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mt5">MT5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mvp">MVP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/nezha">NEZHA </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/nllb">NLLB </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/nllb-moe">NLLB-MoE </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/nystromformer">Nyströmformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/model_doc/open-llama">Open-Llama </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/opt">OPT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/pegasus">Pegasus </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/pegasus_x">PEGASUS-X </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/phobert">PhoBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/plbart">PLBart </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/prophetnet">ProphetNet </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/qdqbert">QDQBert </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/rag">RAG </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/realm">REALM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/reformer">Reformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/rembert">RemBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/retribert">RetriBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/roberta">RoBERTa </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/roc_bert">RoCBert </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/roformer">RoFormer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/rwkv">RWKV </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/model_doc/splinter">Splinter </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/squeezebert">SqueezeBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/switch_transformers">SwitchTransformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/t5">T5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/t5v1.1">T5v1.1 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/tapex">TAPEX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/transfo-xl">Transformer XL </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/ul2">UL2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xmod">X-MOD </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xglm">XGLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlm">XLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlm-prophetnet">XLM-ProphetNet </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlm-roberta">XLM-RoBERTa </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlm-roberta-xl">XLM-RoBERTa-XL </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlm-v">XLM-V </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlnet">XLNet </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/yoso">YOSO </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="fsmt" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#fsmt"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>FSMT</span></h1> <p><strong>DISCLAIMER:</strong> If you see something strange, file a <a href="https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title" rel="nofollow">Github Issue</a> and assign
@stas00.</p> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>FSMT (FairSeq MachineTranslation) models were introduced in <a href="https://arxiv.org/abs/1907.06616" rel="nofollow">Facebook FAIR’s WMT19 News Translation Task Submission</a> by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.</p> <p>The abstract of the paper is the following:</p> <p><em>This paper describes Facebook FAIR’s submission to the WMT19 shared news translation task. We participate in two
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling
toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes,
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the
human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations.
This system improves upon our WMT’18 submission by 4.5 BLEU points._

This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19).

## [](#implementation-notes)Implementation Notes

- FSMT uses separate source and target vocabularies that aren’t combined into one, and it doesn’t share token embeddings either. Its tokenizer is very similar to [XLMTokenizer](/docs/transformers/v4.30.0/en/model_doc/xlm#transformers.XLMTokenizer) and the main model is derived from [BartModel](/docs/transformers/v4.30.0/en/model_doc/bart#transformers.BartModel). A minimal usage sketch follows this list.
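The snippet below is a minimal sketch of translating a sentence with one of the publicly released WMT19 checkpoints (`facebook/wmt19-en-de`, English to German); the other released language directions can be substituted, and the beam-search setting is illustrative rather than prescriptive.

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

# "facebook/wmt19-en-de" is one of the released WMT19 checkpoints (en -> de).
checkpoint = "facebook/wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(checkpoint)
model = FSMTForConditionalGeneration.from_pretrained(checkpoint)

# Encode the source sentence, translate with beam search, then decode the target tokens.
inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
generated_ids = model.generate(**inputs, num_beams=5)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))  # prints the German translation
```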
<a href="/docs/transformers/v4.30.0/en/model_doc/bart#transformers.BartModel">BartModel</a>.</li></ul> <h2 class="relative group"><a id="transformers.FSMTConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>FSMTConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FSMTConfig</span></span></h3> <a id="transformers.FSMTConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/configuration_fsmt.py#L41" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60"> = ['en', 'de']</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">src_vocab_size<span class="opacity-60"> = 42024</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tgt_vocab_size<span class="opacity-60"> = 42024</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_function<span class="opacity-60"> = 'relu'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">d_model<span class="opacity-60"> = 1024</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_length<span class="opacity-60"> = 200</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 1024</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_ffn_dim<span class="opacity-60"> = 4096</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_layers<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_heads<span class="opacity-60"> = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_layerdrop<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_ffn_dim<span class="opacity-60"> = 4096</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_layers<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_heads<span class="opacity-60"> = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_layerdrop<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_dropout<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_dropout<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">init_std<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_start_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_encoder_decoder<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">scale_embedding<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tie_word_embeddings<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_beams<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">length_penalty<span class="opacity-60"> = 1.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">early_stopping<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">forced_eos_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**common_kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 30 parameters</button></div> <p class="flex 
Parameters

- **langs** (`List[str]`) — A list with source language and target_language (e.g., ['en', 'ru']).
- **src_vocab_size** (`int`) — Vocabulary size of the encoder. Defines the number of different tokens that can be represented by the `input_ids` passed to the forward method in the encoder.
- **tgt_vocab_size** (`int`) — Vocabulary size of the decoder. Defines the number of different tokens that can be represented by the `input_ids` passed to the forward method in the decoder.
- **d_model** (`int`, *optional*, defaults to 1024) — Dimensionality of the layers and the pooler layer.
- **encoder_layers** (`int`, *optional*, defaults to 12) — Number of encoder layers.
- **decoder_layers** (`int`, *optional*, defaults to 12) — Number of decoder layers.
- **encoder_attention_heads** (`int`, *optional*, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
- **decoder_attention_heads** (`int`, *optional*, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
- **decoder_ffn_dim** (`int`, *optional*, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
- **encoder_ffn_dim** (`int`, *optional*, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
- **activation_function** (`str` or `Callable`, *optional*, defaults to `"relu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_dropout** (`float`, *optional*, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **activation_dropout** (`float`, *optional*, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
- **max_position_embeddings** (`int`, *optional*, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **init_std** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **scale_embedding** (`bool`, *optional*, defaults to `True`) — Scale embeddings by dividing by sqrt(d_model).
- **bos_token_id** (`int`, *optional*, defaults to 0) — Beginning of stream token id.
- **pad_token_id** (`int`, *optional*, defaults to 1) — Padding token id.
- **eos_token_id** (`int`, *optional*, defaults to 2) — End of stream token id.
- **decoder_start_token_id** (`int`, *optional*) — This model starts decoding with `eos_token_id`.
- **encoder_layerdrop** (`float`, *optional*, defaults to 0.0) —
Google “layerdrop arxiv”, as its not explainable in one line.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.decoder_layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.decoder_layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) —
Google “layerdrop arxiv”, as its not explainable in one line.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.is_encoder_decoder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.is_encoder_decoder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_encoder_decoder</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether this is an encoder/decoder model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.tie_word_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.tie_word_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tie_word_embeddings</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Whether to tie input and output embeddings.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.num_beams" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.num_beams"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_beams</strong> (<code>int</code>, <em>optional</em>, defaults to 5) —
Number of beams for beam search that will be used by default in the <code>generate</code> method of the model. 1 means
no beam search.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.length_penalty" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.length_penalty"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>length_penalty</strong> (<code>float</code>, <em>optional</em>, defaults to 1) —
Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
likelihood of the sequence (i.e. negative), <code>length_penalty</code> > 0.0 promotes longer sequences, while
<code>length_penalty</code> < 0.0 encourages shorter sequences.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.early_stopping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.early_stopping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>early_stopping</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Flag that will be used by default in the <code>generate</code> method of the model. Whether to stop the beam search
when at least <code>num_beams</code> sentences are finished per batch or not.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether or not the model should return the last key/values attentions (not used by all models).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.forced_eos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.forced_eos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>forced_eos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 2) —
The id of the token to force as the last generated token when <code>max_length</code> is reached. Usually set to
<code>eos_token_id</code>.</span></span> </li></ul> </div></div> <p>This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTModel">FSMTModel</a>. It is used to instantiate a FSMT
This is the configuration class to store the configuration of a [FSMTModel](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTModel). It is used to instantiate a FSMT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FSMT [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import FSMTConfig, FSMTModel

>>> # Initializing a FSMT facebook/wmt19-en-ru style configuration
>>> config = FSMTConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = FSMTModel(config)

>>> # Accessing the model configuration
>>> configuration = model.config
```
#### to\_dict

[](#transformers.FSMTConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/configuration_fsmt.py#L220)

( ) → `Dict[str, any]`

Returns

`Dict[str, any]`

Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary. Override the default *to_dict()* from *PretrainedConfig*.
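As a quick, hedged illustration of `to_dict()` (the `num_beams` value looked up below is simply the default documented above):

```
>>> from transformers import FSMTConfig

>>> config = FSMTConfig()
>>> config_dict = config.to_dict()

>>> # A plain Python dict holding the configuration attributes
>>> config_dict["num_beams"]
5
```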
## [](#transformers.FSMTTokenizer)FSMTTokenizer

### class transformers.FSMTTokenizer

[](#transformers.FSMTTokenizer)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L135)

( langs = None src\_vocab\_file = None tgt\_vocab\_file = None merges\_file = None do\_lower\_case = False unk\_token = '<unk>' bos\_token = '<s>' sep\_token = '</s>' pad\_token = '<pad>' \*\*kwargs )

Parameters

- **langs** (`List[str]`) — A list of two languages to translate from and to, for instance `["en", "ru"]`.
- **src_vocab_file** (`str`) — File containing the vocabulary for the source language.
- **tgt_vocab_file** (`str`) — File containing the vocabulary for the target language.
- **merges_file** (`str`) — File containing the merges.
- **do_lower_case** (`bool`, *optional*, defaults to `False`) — Whether or not to lowercase the input when tokenizing.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

  When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`.

- **sep_token** (`str`, *optional*, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.

Construct a FAIRSEQ Transformer tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization.
- Normalizing all input text.
- The arguments `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like "__classify__") to a vocabulary.
- The argument `langs` defines a pair of languages.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
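The example below is a small usage sketch; it assumes the `facebook/wmt19-en-ru` checkpoint and its vocabulary files can be downloaded from the Hub:

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

>>> # Moses preprocessing and BPE are applied under the hood
>>> encoded = tokenizer("Machine learning is great, isn't it?")

>>> # Decode back to text, dropping special tokens such as the trailing </s>
>>> text = tokenizer.decode(encoded["input_ids"], skip_special_tokens=True)
```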
#### build\_inputs\_with\_special\_tokens

[](#transformers.FSMTTokenizer.build_inputs_with_special_tokens)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L404)

( token\_ids\_0: typing.List[int] token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns

`List[int]`

List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A FAIRSEQ Transformer sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`
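A rough sketch of calling this method directly is shown below (again assuming the `facebook/wmt19-en-ru` tokenizer files are available); in normal use, `tokenizer(...)` and `encode()` apply the same formatting automatically:

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

>>> # Single sequence and sequence pair with the FAIRSEQ special tokens added
>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
```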
class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) —
List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) —
Optional second list of IDs for sequence pairs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.FSMTTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><code>List[int]</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p>
</p> </div></div> <p>Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
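As a quick illustration, here is a minimal sketch assuming the publicly available `facebook/wmt19-en-de` checkpoint (any FSMT checkpoint behaves the same way):

```python
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")

# Encode a sentence with special tokens, then ask which positions hold them.
ids = tokenizer.encode("Machine learning is great", add_special_tokens=True)
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# `mask` has one entry per id: 1 for special tokens (typically the trailing </s>), 0 for ordinary tokens.
```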
#### create_token_type_ids_from_sequences

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L458)

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

**Parameters**

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

**Returns**

`List[int]`

List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Creates a mask from the two sequences passed to be used in a sequence-pair classification task. A FAIRSEQ Transformer sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
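A minimal sketch of the sequence-pair case (the checkpoint name is only an example):

```python
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")

ids_a = tokenizer.encode("First sequence", add_special_tokens=False)
ids_b = tokenizer.encode("Second sequence", add_special_tokens=False)

# 0s cover the first sequence (plus its separator), 1s cover the second one.
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
```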
#### save_vocabulary

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/tokenization_fsmt.py#L491)

( save_directory: str filename_prefix: typing.Optional[str] = None )
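A short usage sketch (the target directory name is arbitrary and assumed to exist):

```python
import os

from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")

os.makedirs("./fsmt-vocab", exist_ok=True)
# Writes the tokenizer's vocabulary and merges files to disk and returns the saved file paths.
saved_files = tokenizer.save_vocabulary("./fsmt-vocab")
```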
group"><a id="transformers.FSMTModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>FSMTModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FSMTModel</span></span></h3> <a id="transformers.FSMTModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1036" target="_blank"><span><</span> <span 
class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: FSMTConfig</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig">FSMTConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The bare FSMT Model outputting raw hidden-states without any specific head on top.</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)</p> <p>This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
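To make the configuration/weights distinction concrete, a minimal sketch (the checkpoint name is illustrative):

```python
from transformers import FSMTConfig, FSMTModel

# From a config: the architecture is built, but the weights are randomly initialized.
config = FSMTConfig()
model = FSMTModel(config)

# From a checkpoint: both the architecture and the pretrained weights are loaded.
model = FSMTModel.from_pretrained("facebook/wmt19-en-de")
```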
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1058)

( input_ids: LongTensor attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.Seq2SeqModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)`
**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using `FSMTTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and `PreTrainedTokenizer.__call__()` for details. [What are decoder input IDs?](../glossary#decoder-input-ids) FSMT uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default.
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **decoder_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **encoder_outputs** (`Tuple(torch.FloatTensor)`, *optional*) — Tuple consisting of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`Tuple(torch.FloatTensor)` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
- **use_cache** (`bool`, *optional*, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

**Returns**

[transformers.modeling_outputs.Seq2SeqModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)`
A [transformers.modeling_outputs.Seq2SeqModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([FSMTConfig](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The [FSMTModel](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, FSMTModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/wmt19-ru-en")
>>> model = FSMTModel.from_pretrained("facebook/wmt19-ru-en")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1172" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: FSMTConfig</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig">FSMTConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The FSMT Model with a language modeling head. Can be used for summarization.</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)</p> <p>This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTForConditionalGeneration.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.FSMTForConditionalGeneration.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTForConditionalGeneration.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/fsmt/modeling_fsmt.py#L1192" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: LongTensor</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
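As a quick illustration of the `config` parameter described above, here is a minimal sketch, not taken from the reference itself, contrasting configuration-only initialization with loading pretrained weights (the checkpoint name is the one used in the examples on this page):

```python
>>> from transformers import FSMTConfig, FSMTForConditionalGeneration

>>> # initializing from a configuration builds the architecture with random weights
>>> config = FSMTConfig.from_pretrained("facebook/wmt19-ru-en")
>>> model = FSMTForConditionalGeneration(config)

>>> # from_pretrained() loads both the configuration and the trained weights
>>> model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en")
```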
dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute 
inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 16 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) —
Indices of input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <code>FSTMTokenizer</code>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) —
Indices of decoder input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#decoder-input-ids">What are decoder input IDs?</a></p>
<p>FSMT uses the <code>eos_token_id</code> as the starting token for <code>decoder_input_ids</code> generation. If <code>past_key_values</code>
is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) —
Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also
be used by default.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 indicates the head is <strong>not masked</strong>,</li>
<li>0 indicates the head is <strong>masked</strong>.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>Tuple(torch.FloatTensor)</code>, <em>optional</em>) —
Tuple consists of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>)
<code>last_hidden_state</code> of shape <code>(batch_size, sequence_length, hidden_size)</code> is a sequence of hidden-states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>Tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that
don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all
<code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, target_sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>decoder_input_ids</code> you can choose to directly pass an embedded
representation. If <code>past_key_values</code> is used, optionally only the last <code>decoder_inputs_embeds</code> have to be
input (see <code>past_key_values</code>). This is useful if you want more control over how to convert
<code>decoder_input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<p></p>
<p>If <code>decoder_input_ids</code> and <code>decoder_inputs_embeds</code> are both unset, <code>decoder_inputs_embeds</code> takes the value
of <code>inputs_embeds</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see
<code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Labels for computing the masked language modeling loss. Indices should either be in <code>[0, ..., config.vocab_size]</code> or -100 (see <code>input_ids</code> docstring). Tokens with indices set to <code>-100</code> are ignored
(masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code>.</span></span> </li></ul> <div id="transformers.FSMTForConditionalGeneration.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTConfig">FSMTConfig</a>) and inputs.</p>
- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss.

- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see the `past_key_values` input) to speed up sequential decoding.

- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The [FSMTForConditionalGeneration](/docs/transformers/v4.30.0/en/model_doc/fsmt#transformers.FSMTForConditionalGeneration) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Translation example:

```python
>>> from transformers import AutoTokenizer, FSMTForConditionalGeneration

>>> mname = "facebook/wmt19-ru-en"
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)

>>> src_text = "Машинное обучение - это здорово, не так ли?"
>>> input_ids = tokenizer(src_text, return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Machine learning is great, isn't it?"
```
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/fnet" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>FNet</a>
<a href="/docs/transformers/model_doc/funnel" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Funnel Transformer<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"FSMT","isExpanded":true,"id":"fsmt","url":"#fsmt","sections":[{"title":"Overview","isExpanded":true,"id":"overview","url":"#overview"},{"title":"Implementation Notes","isExpanded":true,"id":"implementation-notes","url":"#implementation-notes"},{"title":"FSMTConfig","isExpanded":true,"id":"transformers.FSMTConfig","url":"#transformers.FSMTConfig"},{"title":"FSMTTokenizer","isExpanded":true,"id":"transformers.FSMTTokenizer","url":"#transformers.FSMTTokenizer"},{"title":"FSMTModel","isExpanded":true,"id":"transformers.FSMTModel","url":"#transformers.FSMTModel"},{"title":"FSMTForConditionalGeneration","isExpanded":true,"id":"transformers.FSMTForConditionalGeneration","url":"#transformers.FSMTForConditionalGeneration"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#fsmt" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-fsmt">FSMT</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#implementation-notes" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-implementation-notes"><wbr>Implementation <wbr>Notes</a> <a href="#transformers.FSMTConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FSMTConfig">FSMT<wbr>Config</a> <a href="#transformers.FSMTTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FSMTTokenizer">FSMT<wbr>Tokenizer</a> <a href="#transformers.FSMTModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FSMTModel">FSMT<wbr>Model</a> <a href="#transformers.FSMTForConditionalGeneration" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FSMTForConditionalGeneration">FSMT<wbr>For<wbr>Conditional<wbr>Generation</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/fsmt" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/fsmt");
}
</script>
<iframe name="__privateStripeMetricsController4690" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Ffsmt&title=FSMT&referrer=&muid=NA&sid=NA&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:54:59.750Z |
Masked language modeling | https://huggingface.co/docs/transformers/tasks/masked_language_modeling | Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that require a good contextual understanding of an entire sequence. BERT is an example of a masked language model.
This guide will show you how to:
1. Finetune [DistilRoBERTa](https://huggingface.co/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.
You can finetune other architectures for masked language modeling following the same steps in this guide. Choose one of the following architectures:
[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
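Because the guide loads everything through the `Auto*` classes, switching to one of the architectures above usually only means changing the checkpoint name. A minimal sketch, assuming you want to try `bert-base-uncased` instead of DistilRoBERTa:

```
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM

>>> checkpoint = "bert-base-uncased"  # any masked language modeling checkpoint from the list above
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForMaskedLM.from_pretrained(checkpoint)```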
Before you begin, make sure you have all the necessary libraries installed:
```
pip install transformers datasets evaluate```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```
>>> from huggingface_hub import notebook_login
>>> notebook_login()```
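If you're working from a terminal rather than a notebook, you can log in with the Hugging Face CLI instead; it stores the same token:

```
huggingface-cli login```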
## [](#load-eli5-dataset)Load ELI5 dataset
Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```
>>> from datasets import load_dataset
>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")```
Split the dataset’s `train_asks` split into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:
```
>>> eli5 = eli5.train_test_split(test_size=0.2)```
Then take a look at an example:
```
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
'score': [6, 3],
'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
'answers_urls': {'url': []},
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls': {'url': []}}```
While this may look like a lot, you’re only really interested in the `text` field. What’s cool about language modeling tasks is you don’t need separate labels (also known as an unsupervised task) because the text itself provides them: for masked language modeling, the masked tokens are the labels the model learns to predict.
## [](#preprocess)Preprocess
For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")```
You’ll notice from the example above that the `text` field is actually nested inside `answers`. This means you’ll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:
```
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
'answers.score': [6, 3],
'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
'answers_urls.url': [],
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls.url': []}```
Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
```
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])```
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don’t need:
```
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )```
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.
```
>>> block_size = 128
>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # Drop the small remainder; you could add padding instead if the model supported it,
...     # or customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result```
Apply the `group_texts` function over the entire dataset:
```
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)```
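As a quick sanity check (optional, not part of the original steps), you can verify that each example in the grouped dataset now holds exactly `block_size` tokens:

```
>>> len(lm_dataset["train"][0]["input_ids"]) == block_size
True```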
Now create a batch of examples using [DataCollatorForLanguageModeling](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data:
```
>>> from transformers import DataCollatorForLanguageModeling
>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)```
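If you're curious what the collator produces, you can collate a couple of chunks yourself; this is an optional, illustrative check. Tokens chosen for masking keep their original id as the label, and every other position is set to `-100` so it is ignored by the loss:

```
>>> samples = [{"input_ids": lm_dataset["train"][i]["input_ids"]} for i in range(2)]
>>> batch = data_collator(samples)
>>> print(batch["input_ids"].shape)  # (2, 128): some tokens are replaced by tokenizer.mask_token_id
>>> print((batch["labels"] != -100).sum())  # number of positions to predict, roughly mlm_probability of the tokens```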
For TensorFlow, create the data collator the same way, setting `return_tensors="tf"` so the masked batches are returned as TensorFlow tensors:
```
>>> from transformers import DataCollatorForLanguageModeling
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")```
## [](#train)Train
If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
You’re ready to start training your model now! Load DistilRoBERTa with [AutoModelForMaskedLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForMaskedLM):
```
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")```
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
3. Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_mlm_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=lm_dataset["train"],
... eval_dataset=lm_dataset["test"],
... data_collator=data_collator,
... )
>>> trainer.train()```
Once training is completed, use the [evaluate()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:
```
>>> import math
>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76```
Then share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:
```
>>> trainer.push_to_hub()```
If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)```
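If you'd rather use a decaying learning rate than a constant one, the `create_optimizer` helper imported above can build the optimizer and schedule together. A sketch, assuming the same batch size of 16 and 3 epochs used elsewhere in this guide:

```
>>> num_train_steps = (len(lm_dataset["train"]) // 16) * 3  # assumed: batch_size=16, 3 epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
... )```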
Then you can load DistilRoBERTa with [TFAutoModelForMaskedLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForMaskedLM):
```
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")```
Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):
```
>>> tf_train_set = model.prepare_tf_dataset(
... lm_dataset["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... lm_dataset["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:
```
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) ```
The last thing to set up before you start training is a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):
```
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_eli5_mlm_model",
... tokenizer=tokenizer,
... )```
Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:
```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
## [](#inference)Inference
Great, now that you’ve finetuned a model, you can use it for inference!
Come up with some text you’d like the model to fill in the blank with, and use the special `<mask>` token to indicate the blank:
```
>>> text = "The Milky Way is a <mask> galaxy."```
The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for fill-mask with your model, and pass your text to it. If you like, you can use the `top_k` parameter to specify how many predictions to return:
```
>>> from transformers import pipeline
>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
'token': 21300,
'token_str': ' spiral',
'sequence': 'The Milky Way is a spiral galaxy.'},
{'score': 0.07087188959121704,
'token': 2232,
'token_str': ' massive',
'sequence': 'The Milky Way is a massive galaxy.'},
{'score': 0.06434620916843414,
'token': 650,
'token_str': ' small',
'sequence': 'The Milky Way is a small galaxy.'}]```
Tokenize the text and return the `input_ids` as PyTorch tensors. You’ll also need to specify the position of the `<mask>` token:
```
>>> import torch
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]```
Pass your inputs to the model and return the `logits` of the masked token:
```
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]```
Then return the three masked tokens with the highest probability and print them out:
```
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.```
Tokenize the text and return the `input_ids` as TensorFlow tensors. You’ll also need to specify the position of the `<mask>` token:
```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]```
Pass your inputs to the model and return the `logits` of the masked token:
```
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]```
Then return the three masked tokens with the highest probability and print them out:
```
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.```
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="Masked language modeling">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/tasks/masked_language_modeling">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>Masked language modeling</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"masked-language-modeling","sections":[{"local":"load-eli5-dataset","title":"Load ELI5 dataset"},{"local":"preprocess","title":"Preprocess"},{"local":"train","title":"Train"},{"local":"inference","title":"Inference"}],"title":"Masked language modeling"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":true,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","isExpanded":true,"id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/masked_language_modeling","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Masked language modeling"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Masked language modeling</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/sequence_classification">Text classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/token_classification">Token classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/question_answering">Question answering </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/language_modeling">Causal language modeling 
</a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/masked_language_modeling">Masked language modeling </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/translation">Translation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/summarization">Summarization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/multiple_choice">Multiple choice </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="masked-language-modeling" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#masked-language-modeling"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Masked language modeling</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/mqElG5QJWUg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p>Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This
means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that
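To see the idea in action before finetuning anything, here is a minimal, illustrative sketch (not part of the original guide) that uses the generic `fill-mask` pipeline with the plain `distilroberta-base` checkpoint; the example sentence is made up and the prediction noted in the comment is only indicative:

```py
>>> from transformers import pipeline

>>> # Illustrative only: RoBERTa-style tokenizers use "<mask>" as the mask token.
>>> mask_filler = pipeline("fill-mask", model="distilroberta-base")
>>> preds = mask_filler("The capital of France is <mask>.")
>>> [pred["token_str"].strip() for pred in preds]  # top candidates for the masked token, e.g. ['Paris', ...]
```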
This guide will show you how to:

1. Finetune [DistilRoBERTa](https://huggingface.co/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

You can finetune other architectures for masked language modeling following the same steps in this guide. Choose one of the following architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community.
When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## [](#load-eli5-dataset)Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset’s `train_asks` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you’re only really interested in the `text` field. What’s cool about language modeling tasks is you don’t need separate labels (also known as an unsupervised task) because the text itself provides them: the masked tokens *are* the labels.

## [](#preprocess)Preprocess

[Task video](https://www.youtube-nocookie.com/embed/8PmhEIXhBvI)

For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```

You’ll notice from the example above that the `text` field is actually nested inside `answers`. This means you’ll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
 "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
<span class="hljs-meta">... </span> preprocess_function,
<span class="hljs-meta">... </span> batched=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> num_proc=<span class="hljs-number">4</span>,
<span class="hljs-meta">... </span> remove_columns=eli5[<span class="hljs-string">"train"</span>].column_names,
<span class="hljs-meta">... </span>)</pre></div> <p>This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.</p> <p>You can now use a second preprocessing function to</p> <ul><li>concatenate all the sequences</li> <li>split the concatenated sequences into shorter chunks defined by <code>block_size</code>, which should be both shorter than the maximum input length and short enough for your GPU RAM.</li></ul> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>block_size = <span class="hljs-number">128</span>
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">group_texts</span>(<span class="hljs-params">examples</span>):
<span class="hljs-meta">... </span> <span class="hljs-comment"># Concatenate all texts.</span>
<span class="hljs-meta">... </span> concatenated_examples = {k: <span class="hljs-built_in">sum</span>(examples[k], []) <span class="hljs-keyword">for</span> k <span class="hljs-keyword">in</span> examples.keys()}
<span class="hljs-meta">... </span> total_length = <span class="hljs-built_in">len</span>(concatenated_examples[<span class="hljs-built_in">list</span>(examples.keys())[<span class="hljs-number">0</span>]])
<span class="hljs-meta">... </span> <span class="hljs-comment"># We drop the small remainder, we could add padding if the model supported it instead of this drop, you can</span>
<span class="hljs-meta">... </span> <span class="hljs-comment"># customize this part to your needs.</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> total_length >= block_size:
<span class="hljs-meta">... </span> total_length = (total_length // block_size) * block_size
<span class="hljs-meta">... </span> <span class="hljs-comment"># Split by chunks of block_size.</span>
<span class="hljs-meta">... </span> result = {
<span class="hljs-meta">... </span> k: [t[i : i + block_size] <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">0</span>, total_length, block_size)]
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> k, t <span class="hljs-keyword">in</span> concatenated_examples.items()
<span class="hljs-meta">... </span> }
<span class="hljs-meta">... </span> result[<span class="hljs-string">"labels"</span>] = result[<span class="hljs-string">"input_ids"</span>].copy()
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> result</pre></div> <p>Apply the <code>group_texts</code> function over the entire dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>lm_dataset = tokenized_eli5.<span class="hljs-built_in">map</span>(group_texts, batched=<span class="hljs-literal">True</span>, num_proc=<span class="hljs-number">4</span>)</pre></div> <p>Now create a batch of examples using <a href="/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling">DataCollatorForLanguageModeling</a>. It’s more efficient to <em>dynamically pad</em> the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.</p> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 
**Pytorch**

Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> DataCollatorForLanguageModeling
<span class="hljs-meta">>>> </span>data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=<span class="hljs-number">0.15</span>, return_tensors=<span class="hljs-string">"tf"</span>)</pre></div></div></div> </div> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Train</span></h2> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div 
class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p>You’re ready to start training your model now! Load DistilRoBERTa with <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForMaskedLM">AutoModelForMaskedLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForMaskedLM
<span class="hljs-meta">>>> </span>model = AutoModelForMaskedLM.from_pretrained(<span class="hljs-string">"distilroberta-base"</span>)</pre></div> <p>At this point, only three steps remain:</p> <ol><li>Define your training hyperparameters in <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments">TrainingArguments</a>. The only required parameter is <code>output_dir</code> which specifies where to save your model. You’ll push this model to the Hub by setting <code>push_to_hub=True</code> (you need to be signed in to Hugging Face to upload your model).</li> <li>Pass the training arguments to <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> along with the model, datasets, and data collator.</li> <li>Call <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>training_args = TrainingArguments(
<span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_eli5_mlm_model"</span>,
<span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">2e-5</span>,
<span class="hljs-meta">... </span> num_train_epochs=<span class="hljs-number">3</span>,
<span class="hljs-meta">... </span> weight_decay=<span class="hljs-number">0.01</span>,
<span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer = Trainer(
<span class="hljs-meta">... </span> model=model,
<span class="hljs-meta">... </span> args=training_args,
<span class="hljs-meta">... </span> train_dataset=lm_dataset[<span class="hljs-string">"train"</span>],
<span class="hljs-meta">... </span> eval_dataset=lm_dataset[<span class="hljs-string">"test"</span>],
<span class="hljs-meta">... </span> data_collator=data_collator,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer.train()</pre></div> <p>Once training is completed, use the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.evaluate">evaluate()</a> method to evaluate your model and get its perplexity:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> math
<span class="hljs-meta">>>> </span>eval_results = trainer.evaluate()
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">f"Perplexity: <span class="hljs-subst">{math.exp(eval_results[<span class="hljs-string">'eval_loss'</span>]):<span class="hljs-number">.2</span>f}</span>"</span>)
Perplexity: <span class="hljs-number">8.76</span></pre></div> <p>Then share your model to the Hub with the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 
**TensorFlow**

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load DistilRoBERTa with [TFAutoModelForMaskedLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForMaskedLM):

```py
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```py
>>> import tensorflow as tf
<span class="hljs-meta">>>> </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer) <span class="hljs-comment"># No loss argument!</span></pre></div> <p>This can be done by specifying where to push your model and tokenizer in the <a href="/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback
<span class="hljs-meta">>>> </span>callback = PushToHubCallback(
<span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_eli5_mlm_model"</span>,
<span class="hljs-meta">... </span> tokenizer=tokenizer,
<span class="hljs-meta">... </span>)</pre></div> <p>Finally, you’re ready to start training your model! Call <a href="https://keras.io/api/models/model_training_apis/#fit-method" rel="nofollow"><code>fit</code></a> with your training and validation datasets, the number of epochs, and your callback to finetune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=<span class="hljs-number">3</span>, callbacks=[callback])</pre></div> <p>Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!</p></div></div> </div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding
<a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb" rel="nofollow">PyTorch notebook</a>
or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Inference</span></h2> <p>Great, now that you’ve finetuned a model, you can use it for inference!</p> <p>Come up with some text you’d like the model to fill in the blank with, and use the special <code><mask></code> token to indicate the blank:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>text = <span class="hljs-string">"The Milky Way is a <mask> galaxy."</span></pre></div> <p>The simplest way to try out your finetuned model for inference is to use it in a <a href="/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. Instantiate a <code>pipeline</code> for fill-mask with your model, and pass your text to it. 
If you like, you can use the <code>top_k</code> parameter to specify how many predictions to return:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline
<span class="hljs-meta">>>> </span>mask_filler = pipeline(<span class="hljs-string">"fill-mask"</span>, <span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>)
<span class="hljs-meta">>>> </span>mask_filler(text, top_k=<span class="hljs-number">3</span>)
[{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.5150994658470154</span>,
<span class="hljs-string">'token'</span>: <span class="hljs-number">21300</span>,
<span class="hljs-string">'token_str'</span>: <span class="hljs-string">' spiral'</span>,
<span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a spiral galaxy.'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.07087188959121704</span>,
<span class="hljs-string">'token'</span>: <span class="hljs-number">2232</span>,
<span class="hljs-string">'token_str'</span>: <span class="hljs-string">' massive'</span>,
<span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a massive galaxy.'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.06434620916843414</span>,
<span class="hljs-string">'token'</span>: <span class="hljs-number">650</span>,
<span class="hljs-string">'token_str'</span>: <span class="hljs-string">' small'</span>,
<span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a small galaxy.'</span>}]</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p>Tokenize the text and return the <code>input_ids</code> as PyTorch tensors. 
You’ll also need to specify the position of the <code><mask></code> token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_eli5_mlm_model"</span>)
<span class="hljs-meta">>>> </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>)
<span class="hljs-meta">>>> </span>mask_token_index = torch.where(inputs[<span class="hljs-string">"input_ids"</span>] == tokenizer.mask_token_id)[<span class="hljs-number">1</span>]</pre></div> <p>Pass your inputs to the model and return the <code>logits</code> of the masked token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForMaskedLM
<span class="hljs-meta">>>> </span>model = AutoModelForMaskedLM.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>)
<span class="hljs-meta">>>> </span>logits = model(**inputs).logits
<span class="hljs-meta">>>> </span>mask_token_logits = logits[<span class="hljs-number">0</span>, mask_token_index, :]</pre></div> <p>Then return the three masked tokens with the highest probability and print them out:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>top_3_tokens = torch.topk(mask_token_logits, <span class="hljs-number">3</span>, dim=<span class="hljs-number">1</span>).indices[<span class="hljs-number">0</span>].tolist()
<span class="hljs-meta">>>> </span><span class="hljs-keyword">for</span> token <span class="hljs-keyword">in</span> top_3_tokens:
<span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way <span class="hljs-keyword">is</span> a spiral galaxy.
The Milky Way <span class="hljs-keyword">is</span> a massive galaxy.
The Milky Way <span class="hljs-keyword">is</span> a small galaxy.</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p>Tokenize the text and return the <code>input_ids</code> as TensorFlow tensors. 
You’ll also need to specify the position of the <code><mask></code> token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_eli5_mlm_model"</span>)
<span class="hljs-meta">>>> </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"tf"</span>)
<span class="hljs-meta">>>> </span>mask_token_index = tf.where(inputs[<span class="hljs-string">"input_ids"</span>] == tokenizer.mask_token_id)[<span class="hljs-number">0</span>, <span class="hljs-number">1</span>]</pre></div> <p>Pass your inputs to the model and return the <code>logits</code> of the masked token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForMaskedLM
<span class="hljs-meta">>>> </span>model = TFAutoModelForMaskedLM.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>)
<span class="hljs-meta">>>> </span>logits = model(**inputs).logits
<span class="hljs-meta">>>> </span>mask_token_logits = logits[<span class="hljs-number">0</span>, mask_token_index, :]</pre></div> <p>Then return the three masked tokens with the highest probability and print them out:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>top_3_tokens = tf.math.top_k(mask_token_logits, <span class="hljs-number">3</span>).indices.numpy()
<span class="hljs-meta">>>> </span><span class="hljs-keyword">for</span> token <span class="hljs-keyword">in</span> top_3_tokens:
<span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way <span class="hljs-keyword">is</span> a spiral galaxy.
The Milky Way <span class="hljs-keyword">is</span> a massive galaxy.
The Milky Way <span class="hljs-keyword">is</span> a small galaxy.</pre></div></div></div> </div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/language_modeling" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Causal language modeling</a>
<a href="/docs/transformers/tasks/translation" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Translation<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Masked language modeling","isExpanded":true,"id":"masked-language-modeling","url":"#masked-language-modeling","sections":[{"title":"Load ELI5 dataset","isExpanded":true,"id":"load-eli5-dataset","url":"#load-eli5-dataset"},{"title":"Preprocess","isExpanded":true,"id":"preprocess","url":"#preprocess"},{"title":"Train","isExpanded":true,"id":"train","url":"#train"},{"title":"Inference","isExpanded":true,"id":"inference","url":"#inference"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#masked-language-modeling" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-masked-language-modeling"><wbr>Masked language modeling</a> <a href="#load-eli5-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-eli5-dataset"><wbr>Load EL<wbr>I5 dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/masked_language_modeling" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/masked_language_modeling");
}
</script>
<iframe name="__privateStripeMetricsController6370" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fmasked_language_modeling&title=Masked%20language%20modeling&referrer=&muid=NA&sid=NA&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:54:59.946Z |
Multiple choice | https://huggingface.co/docs/transformers/tasks/multiple_choice | A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer.
This guide will show you how to:
1. Finetune [BERT](https://huggingface.co/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset to select the best answer given multiple options and some context.
2. Use your finetuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
Before you begin, make sure you have all the necessary libraries installed:
```
pip install transformers datasets evaluate```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```
>>> from huggingface_hub import notebook_login
>>> notebook_login()```
## [](#load-swag-dataset)Load SWAG dataset
Start by loading the `regular` configuration of the SWAG dataset from the 🤗 Datasets library:
```
>>> from datasets import load_dataset
>>> swag = load_dataset("swag", "regular")```
Then take a look at an example:
```
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}```
While it looks like there are a lot of fields here, it is actually pretty straightforward (the short sketch after this list shows how they fit together):

- `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field.
- `ending0`, `ending1`, `ending2`, `ending3`: each suggests a possible way the sentence can end, but only one of them is correct.
- `label`: identifies the correct sentence ending.
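To make that concrete, here is a minimal sketch (reusing the `swag["train"][0]` example above; the `candidates` name is only for illustration) of how the fields combine into the four sequences the model has to choose between:

```
>>> example = swag["train"][0]

>>> # Each candidate sequence is the start phrase followed by one of the four endings
>>> candidates = [
...     f"{example['sent1']} {example['sent2']} {example[end]}"
...     for end in ["ending0", "ending1", "ending2", "ending3"]
... ]
>>> candidates[example["label"]]
'Members of the procession walk down the street holding small horn brass instruments. A drum line passes by walking down the street playing their instruments.'```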
## [](#preprocess)Preprocess
The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")```
The preprocessing function you want to create needs to:
1. Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding `input_ids`, `attention_mask`, and `labels` field.
```
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]
>>> def preprocess_function(examples):
... first_sentences = [[context] * 4 for context in examples["sent1"]]
... question_headers = examples["sent2"]
... second_sentences = [
... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
... ]
... first_sentences = sum(first_sentences, [])
... second_sentences = sum(second_sentences, [])
... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}```
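The slicing in the final `return` line simply regroups the flat list of tokenized sequences back into groups of four, one group per original example. A toy illustration of that regrouping (not part of the guide’s code):

```
>>> flat = ["a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3"]
>>> [flat[i : i + 4] for i in range(0, len(flat), 4)]
[['a0', 'a1', 'a2', 'a3'], ['b0', 'b1', 'b2', 'b3']]```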
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```
tokenized_swag = swag.map(preprocess_function, batched=True)```
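If you want to confirm the structure, each mapped example should now contain four tokenized sequences, one per candidate ending:

```
>>> len(tokenized_swag["train"][0]["input_ids"])
4```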
🤗 Transformers doesn’t have a data collator for multiple choice, so you’ll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:
```
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
...     Data collator that will dynamically pad the inputs for multiple choice.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="pt",
... )
... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
... batch["labels"] = torch.tensor(labels, dtype=torch.int64)
... return batch```
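Before training, you can optionally sanity-check the collator on a couple of preprocessed examples. The snippet below is just an illustration: the `features` selection mimics what the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) does when it drops the unused text columns, and each tensor in the batch should come out with shape `(batch_size, num_choices, padded_length)`:

```
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)

>>> # Keep only the tokenized fields; the Trainer normally removes the unused columns for you
>>> features = [
...     {k: tokenized_swag["train"][i][k] for k in ("input_ids", "attention_mask", "label")}
...     for i in range(2)
... ]
>>> batch = data_collator(features)
>>> batch["input_ids"].shape  # -> torch.Size([2, 4, padded_length])```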
The same collator, returning TensorFlow tensors instead:

```
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
...     Data collator that will dynamically pad the inputs for multiple choice.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="tf",
... )
... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
... return batch```
## [](#evaluate)Evaluate
Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")```
Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:
```
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)```
Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
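As a quick sanity check with made-up numbers, `compute_metrics` receives a tuple of the per-choice prediction scores and the integer labels:

```
>>> dummy_predictions = np.array([[0.1, 0.6, 0.2, 0.1], [0.7, 0.1, 0.1, 0.1]])
>>> dummy_labels = np.array([1, 0])
>>> compute_metrics((dummy_predictions, dummy_labels))
{'accuracy': 1.0}```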
## [](#train)Train
If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
You’re ready to start training your model now! Load BERT with [AutoModelForMultipleChoice](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForMultipleChoice):
```
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")```
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
... evaluation_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_swag["train"],
... eval_dataset=tokenized_swag["validation"],
... tokenizer=tokenizer,
... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... compute_metrics=compute_metrics,
... )
>>> trainer.train()```
Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:
```
>>> trainer.push_to_hub()```
If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)```
Then you can load BERT with [TFAutoModelForMultipleChoice](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForMultipleChoice):
```
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")```
Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):
```
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_swag["train"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_swag["validation"],
... shuffle=False,
... batch_size=batch_size,
... collate_fn=data_collator,
... )```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:
```
>>> model.compile(optimizer=optimizer) ```
The last two things to set up before you start training are computing the accuracy from the predictions and providing a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):
```
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)```
Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):
```
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_model",
... tokenizer=tokenizer,
... )```
Then bundle your callbacks together:
```
>>> callbacks = [metric_callback, push_to_hub_callback]```
Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
## [](#inference)Inference
Great, now that you’ve finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
```
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."```
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)```
Pass your inputs and labels to the model and return the `logits`:
```
>>> from transformers import AutoModelForMultipleChoice
>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits```
Get the class with the highest probability:
```
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0```
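Since the candidates were passed in the order `[candidate1, candidate2]`, you can map the predicted index back to the answer text (assuming the prediction above):

```
>>> [candidate1, candidate2][predicted_class]
'The law does not apply to croissants and brioche.'```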
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)```
Pass your inputs to the model and return the `logits`:
```
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits```
Get the class with the highest probability:
```
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0```
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":true,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","isExpanded":true,"id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/multiple_choice","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Multiple choice"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 
2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Multiple choice</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/sequence_classification">Text classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/token_classification">Token classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/question_answering">Question answering </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/language_modeling">Causal language modeling </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/masked_language_modeling">Masked language modeling </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/translation">Translation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/summarization">Summarization </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/multiple_choice">Multiple choice </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="multiple-choice" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#multiple-choice"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Multiple choice</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p>A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer.</p> <p>This guide will show you how to:</p> <ol><li>Finetune <a href="https://huggingface.co/bert-base-uncased" rel="nofollow">BERT</a> on the <code>regular</code> configuration of the <a href="https://huggingface.co/datasets/swag" rel="nofollow">SWAG</a> dataset to select the best answer given multiple options and some context.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures:
<p><a href="../model_doc/albert">ALBERT</a>, <a href="../model_doc/bert">BERT</a>, <a href="../model_doc/big_bird">BigBird</a>, <a href="../model_doc/camembert">CamemBERT</a>, <a href="../model_doc/canine">CANINE</a>, <a href="../model_doc/convbert">ConvBERT</a>, <a href="../model_doc/data2vec-text">Data2VecText</a>, <a href="../model_doc/deberta-v2">DeBERTa-v2</a>, <a href="../model_doc/distilbert">DistilBERT</a>, <a href="../model_doc/electra">ELECTRA</a>, <a href="../model_doc/ernie">ERNIE</a>, <a href="../model_doc/ernie_m">ErnieM</a>, <a href="../model_doc/flaubert">FlauBERT</a>, <a href="../model_doc/fnet">FNet</a>, <a href="../model_doc/funnel">Funnel Transformer</a>, <a href="../model_doc/ibert">I-BERT</a>, <a href="../model_doc/longformer">Longformer</a>, <a href="../model_doc/luke">LUKE</a>, <a href="../model_doc/mega">MEGA</a>, <a href="../model_doc/megatron-bert">Megatron-BERT</a>, <a href="../model_doc/mobilebert">MobileBERT</a>, <a href="../model_doc/mpnet">MPNet</a>, <a href="../model_doc/nezha">Nezha</a>, <a href="../model_doc/nystromformer">Nyströmformer</a>, <a href="../model_doc/qdqbert">QDQBert</a>, <a href="../model_doc/rembert">RemBERT</a>, <a href="../model_doc/roberta">RoBERTa</a>, <a href="../model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm</a>, <a href="../model_doc/roc_bert">RoCBert</a>, <a href="../model_doc/roformer">RoFormer</a>, <a href="../model_doc/squeezebert">SqueezeBERT</a>, <a href="../model_doc/xlm">XLM</a>, <a href="../model_doc/xlm-roberta">XLM-RoBERTa</a>, <a href="../model_doc/xlm-roberta-xl">XLM-RoBERTa-XL</a>, <a href="../model_doc/xlnet">XLNet</a>, <a href="../model_doc/xmod">X-MOD</a>, <a href="../model_doc/yoso">YOSO</a></p></div> <p>Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>pip install transformers datasets evaluate</pre></div> <p>We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login
<span class="hljs-meta">>>> </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-swag-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-swag-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load SWAG dataset</span></h2> <p>Start by loading the <code>regular</code> configuration of the SWAG dataset from the 🤗 Datasets library:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span>swag = load_dataset(<span class="hljs-string">"swag"</span>, <span class="hljs-string">"regular"</span>)</pre></div> <p>Then take a look at an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>swag[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>]
{<span class="hljs-string">'ending0'</span>: <span class="hljs-string">'passes by walking down the street playing their instruments.'</span>,
<span class="hljs-string">'ending1'</span>: <span class="hljs-string">'has heard approaching them.'</span>,
<span class="hljs-string">'ending2'</span>: <span class="hljs-string">"arrives and they're outside dancing and asleep."</span>,
<span class="hljs-string">'ending3'</span>: <span class="hljs-string">'turns the lead singer watches the performance.'</span>,
<span class="hljs-string">'fold-ind'</span>: <span class="hljs-string">'3416'</span>,
<span class="hljs-string">'gold-source'</span>: <span class="hljs-string">'gold'</span>,
<span class="hljs-string">'label'</span>: <span class="hljs-number">0</span>,
<span class="hljs-string">'sent1'</span>: <span class="hljs-string">'Members of the procession walk down the street holding small horn brass instruments.'</span>,
<span class="hljs-string">'sent2'</span>: <span class="hljs-string">'A drum line'</span>,
<span class="hljs-string">'startphrase'</span>: <span class="hljs-string">'Members of the procession walk down the street holding small horn brass instruments. A drum line'</span>,
<span class="hljs-string">'video-id'</span>: <span class="hljs-string">'anetv_jkn6uvmqwh4'</span>}</pre></div> <p>While it looks like there are a lot of fields here, it is actually pretty straightforward:</p> <ul><li><code>sent1</code> and <code>sent2</code>: these fields show how a sentence starts, and if you put the two together, you get the <code>startphrase</code> field.</li> <li><code>ending</code>: suggests a possible ending for how a sentence can end, but only one of them is correct.</li> <li><code>label</code>: identifies the correct sentence ending.</li></ul> <h2 class="relative group"><a id="preprocess" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocess</span></h2> <p>The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"bert-base-uncased"</span>)</pre></div> <p>The preprocessing function you want to create needs to:</p> <ol><li>Make four copies of the <code>sent1</code> field and combine each of them with <code>sent2</code> to recreate how a sentence starts.</li> <li>Combine <code>sent2</code> with each of the four possible sentence endings.</li> <li>Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding <code>input_ids</code>, <code>attention_mask</code>, and <code>labels</code> field.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>ending_names = [<span class="hljs-string">"ending0"</span>, <span class="hljs-string">"ending1"</span>, <span class="hljs-string">"ending2"</span>, <span class="hljs-string">"ending3"</span>]
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">preprocess_function</span>(<span class="hljs-params">examples</span>):
<span class="hljs-meta">... </span> first_sentences = [[context] * <span class="hljs-number">4</span> <span class="hljs-keyword">for</span> context <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"sent1"</span>]]
<span class="hljs-meta">... </span> question_headers = examples[<span class="hljs-string">"sent2"</span>]
<span class="hljs-meta">... </span> second_sentences = [
<span class="hljs-meta">... </span> [<span class="hljs-string">f"<span class="hljs-subst">{header}</span> <span class="hljs-subst">{examples[end][i]}</span>"</span> <span class="hljs-keyword">for</span> end <span class="hljs-keyword">in</span> ending_names] <span class="hljs-keyword">for</span> i, header <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(question_headers)
<span class="hljs-meta">... </span> ]
<span class="hljs-meta">... </span> first_sentences = <span class="hljs-built_in">sum</span>(first_sentences, [])
<span class="hljs-meta">... </span> second_sentences = <span class="hljs-built_in">sum</span>(second_sentences, [])
<span class="hljs-meta">... </span> tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=<span class="hljs-literal">True</span>)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> {k: [v[i : i + <span class="hljs-number">4</span>] <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">0</span>, <span class="hljs-built_in">len</span>(v), <span class="hljs-number">4</span>)] <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> tokenized_examples.items()}</pre></div> <p>To apply the preprocessing function over the entire dataset, use 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map" rel="nofollow">map</a> method. You can speed up the <code>map</code> function by setting <code>batched=True</code> to process multiple elements of the dataset at once:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>tokenized_swag = swag.<span class="hljs-built_in">map</span>(preprocess_function, batched=<span class="hljs-literal">True</span>)</pre></div> <p>🤗 Transformers doesn’t have a data collator for multiple choice, so you’ll need to adapt the <a href="/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding">DataCollatorWithPadding</a> to create a batch of examples. 
🤗 Transformers doesn’t have a data collator for multiple choice, so you’ll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It’s more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:

**Pytorch**

```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch
>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that dynamically pads the inputs for multiple choice.
...     """
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="pt",
...         )
...         batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
...         batch["labels"] = torch.tensor(labels, dtype=torch.int64)
...         return batch
```
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> batch</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded 
**TensorFlow**

```python
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that dynamically pads the inputs for multiple choice.
...     """
...
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="tf",
...         )
...
...         batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
...         batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
...         return batch
```
<span class="hljs-meta">>>> </span>accuracy = evaluate.load(<span class="hljs-string">"accuracy"</span>)</pre></div> <p>Then create a function that passes your predictions and labels to <a href="https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute" rel="nofollow">compute</a> to calculate the accuracy:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">compute_metrics</span>(<span class="hljs-params">eval_pred</span>):
<span class="hljs-meta">... </span> predictions, labels = eval_pred
<span class="hljs-meta">... </span> predictions = np.argmax(predictions, axis=<span class="hljs-number">1</span>)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> accuracy.compute(predictions=predictions, references=labels)</pre></div> <p>Your <code>compute_metrics</code> function is ready to go now, and you’ll return to it when you setup your training.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Train</span></h2> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> 
<div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p>You’re ready to start training your model now! Load BERT with <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForMultipleChoice">AutoModelForMultipleChoice</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForMultipleChoice, TrainingArguments, Trainer
<span class="hljs-meta">>>> </span>model = AutoModelForMultipleChoice.from_pretrained(<span class="hljs-string">"bert-base-uncased"</span>)</pre></div> <p>At this point, only three steps remain:</p> <ol><li>Define your training hyperparameters in <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments">TrainingArguments</a>. The only required parameter is <code>output_dir</code> which specifies where to save your model. You’ll push this model to the Hub by setting <code>push_to_hub=True</code> (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> will evaluate the accuracy and save the training checkpoint.</li> <li>Pass the training arguments to <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> along with the model, dataset, tokenizer, data collator, and <code>compute_metrics</code> function.</li> <li>Call <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>training_args = TrainingArguments(
<span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_swag_model"</span>,
<span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> save_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> load_best_model_at_end=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">5e-5</span>,
<span class="hljs-meta">... </span> per_device_train_batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> per_device_eval_batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> num_train_epochs=<span class="hljs-number">3</span>,
<span class="hljs-meta">... </span> weight_decay=<span class="hljs-number">0.01</span>,
<span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer = Trainer(
<span class="hljs-meta">... </span> model=model,
<span class="hljs-meta">... </span> args=training_args,
<span class="hljs-meta">... </span> train_dataset=tokenized_swag[<span class="hljs-string">"train"</span>],
<span class="hljs-meta">... </span> eval_dataset=tokenized_swag[<span class="hljs-string">"validation"</span>],
<span class="hljs-meta">... </span> tokenizer=tokenizer,
<span class="hljs-meta">... </span> data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
<span class="hljs-meta">... </span> compute_metrics=compute_metrics,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer.train()</pre></div> <p>Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 
**TensorFlow**

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```python
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

Then you can load BERT with [TFAutoModelForMultipleChoice](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForMultipleChoice):

```python
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```python
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_swag["train"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_swag["validation"],
...     shuffle=False,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```python
>>> model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```python
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```python
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```python
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```python
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).

## [](#inference)Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text and two candidate answers:
<span class="hljs-meta">>>> </span>candidate1 = <span class="hljs-string">"The law does not apply to croissants and brioche."</span>
<span class="hljs-meta">>>> </span>candidate2 = <span class="hljs-string">"The law applies to baguettes."</span></pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p>Tokenize each prompt and candidate answer pair and return PyTorch tensors. 
You should also create some <code>labels</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_swag_model"</span>)
<span class="hljs-meta">>>> </span>inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span>)
<span class="hljs-meta">>>> </span>labels = torch.tensor(<span class="hljs-number">0</span>).unsqueeze(<span class="hljs-number">0</span>)</pre></div> <p>Pass your inputs and labels to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForMultipleChoice
<span class="hljs-meta">>>> </span>model = AutoModelForMultipleChoice.from_pretrained(<span class="hljs-string">"my_awesome_swag_model"</span>)
<span class="hljs-meta">>>> </span>outputs = model(**{k: v.unsqueeze(<span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> inputs.items()}, labels=labels)
<span class="hljs-meta">>>> </span>logits = outputs.logits</pre></div> <p>Get the class with the highest probability:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>predicted_class = logits.argmax().item()
<span class="hljs-meta">>>> </span>predicted_class
<span class="hljs-string">'0'</span></pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p>Tokenize each prompt and candidate answer pair and return TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black 
**TensorFlow**

Tokenize each prompt and candidate answer pair and return TensorFlow tensors:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```

Pass your inputs to the model and return the `logits`:

```python
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```

Get the class with the highest probability:

```python
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
```
<span class="hljs-string">'0'</span></pre></div></div></div> </div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
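The predicted index can be mapped back to the original candidate text if needed. A minimal sketch (not part of the original example), reusing the `candidate1` and `candidate2` strings that were tokenized above:
```
>>> candidates = [candidate1, candidate2]
>>> candidates[predicted_class]```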
https://huggingface.co/docs/transformers/parallelism | The documentation page PARALLELISM doesn’t exist in v4.30.0, but exists on the main version. Click [here](/docs/transformers/main/en/parallelism) to redirect to the main version of the documentation.
Speech Encoder Decoder Models | https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder | The [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (_e.g._ [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has been demonstrated, for example, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
An example of how to use a [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) for inference can be seen in [Speech2Text2](speech_to_text_2).
## [](#randomly-initializing-speechencoderdecodermodel-from-model-configurations)Randomly initializing `SpeechEncoderDecoderModel` from model configurations.
[SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [Wav2Vec2Model](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model) configuration for the encoder and the default `BertForCausalLM` configuration for the decoder.
```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = SpeechEncoderDecoderModel(config=config)```
## [](#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder)Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.
[SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, _e.g._ [Wav2Vec2](wav2vec2), [Hubert](hubert) can serve as the encoder and both pretrained auto-encoding models, _e.g._ BERT, pretrained causal language models, _e.g._ GPT2, as well as the pretrained decoder part of sequence-to-sequence models, _e.g._ decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `SpeechEncoderDecoderModel` class provides a [SpeechEncoderDecoderModel.from\_encoder\_decoder\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained) method.
```
>>> from transformers import SpeechEncoderDecoderModel
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/hubert-large-ll60k", "bert-base-uncased"
... )```
## [](#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference)Loading an existing `SpeechEncoderDecoderModel` checkpoint and performing inference.
To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
To perform inference, one uses the `generate` method, which allows autoregressive generation of text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
```
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>>
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>>
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>>
>>> generated_ids = model.generate(input_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.```
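The decoding strategy can be changed through the arguments of `generate`. The snippet below is a minimal sketch (not from the original example) reusing the `model`, `processor` and `input_values` defined above; the particular generation settings are purely illustrative:
```
>>> # beam search with 5 beams instead of greedy decoding
>>> beam_ids = model.generate(input_values, num_beams=5, max_length=64)
>>> # multinomial sampling instead of greedy decoding
>>> sampled_ids = model.generate(input_values, do_sample=True, top_k=50)
>>> print(processor.batch_decode(beam_ids, skip_special_tokens=True)[0])```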
## [](#training)Training
Once the model is created, it can be fine-tuned, similarly to BART, T5 or any other encoder-decoder model, on a dataset of (speech, text) pairs. As you can see, only two inputs are required for the model to compute a loss: `input_values` (the speech inputs) and `labels` (the `input_ids` of the encoded target sequence).
```
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> encoder_id = "facebook/wav2vec2-base-960h"
>>> decoder_id = "bert-base-uncased"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)
>>>
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>>
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>>
>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
>>>
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()```
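In practice, this loss computation sits inside a regular PyTorch training loop. The snippet below is a minimal sketch, assuming the `model`, `input_values` and `labels` from the example above; a real fine-tuning run would iterate over a properly batched and padded dataset:
```
>>> from torch.optim import AdamW
>>> optimizer = AdamW(model.parameters(), lr=5e-5)
>>> _ = model.train()
>>> for _ in range(3):  # a few illustrative steps on the same batch
...     loss = model(input_values=input_values, labels=labels).loss
...     loss.backward()
...     optimizer.step()
...     optimizer.zero_grad()```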
## [](#transformers.SpeechEncoderDecoderConfig)SpeechEncoderDecoderConfig
### class transformers.SpeechEncoderDecoderConfig
[](#transformers.SpeechEncoderDecoderConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L27)
( \*\*kwargs )
Parameters
- [](#transformers.SpeechEncoderDecoderConfig.kwargs)**kwargs** (_optional_) — Dictionary of keyword arguments. Notably:
- **encoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the encoder config.
- **decoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the decoder config.
[SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) is the configuration class to store the configuration of a [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel). It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:
```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
>>>
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>>
>>> model = SpeechEncoderDecoderModel(config=config)
>>>
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>>
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True
>>>
>>> model.save_pretrained("my-model")
>>>
>>> encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained("my-model")
>>> model = SpeechEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)```
#### from\_encoder\_decoder\_configs
[](#transformers.SpeechEncoderDecoderConfig.from_encoder_decoder_configs)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L93)
( encoder\_config: PretrainedConfig, decoder\_config: PretrainedConfig, \*\*kwargs ) → [SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)
An instance of a configuration object
Instantiate a [SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
#### to\_dict
[](#transformers.SpeechEncoderDecoderConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L110)
( ) → `Dict[str, any]`
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default _to\_dict()_ from _PretrainedConfig_.
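As a small illustration (not part of the original documentation), the nested encoder and decoder configurations end up as plain dictionaries in the serialized output:
```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), BertConfig())
>>> config_dict = config.to_dict()
>>> "encoder" in config_dict and "decoder" in config_dict
True```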
## [](#transformers.SpeechEncoderDecoderModel)SpeechEncoderDecoderModel
### class transformers.SpeechEncoderDecoderModel
[](#transformers.SpeechEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L173)
( config: typing.Optional\[transformers.configuration\_utils.PretrainedConfig\] = None, encoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None, decoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None )
Parameters
- [](#transformers.SpeechEncoderDecoderModel.config)**config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.
After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
[SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as encoder and another one as decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
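Besides `from_encoder_decoder_pretrained()`, the model can also be built directly from already-instantiated encoder and decoder models. A minimal sketch, with illustrative (not prescribed) model choices:
```
>>> from transformers import BertLMHeadModel, SpeechEncoderDecoderModel, Wav2Vec2Model
>>> encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
>>> decoder = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
>>> model = SpeechEncoderDecoderModel(encoder=encoder, decoder=decoder)```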
#### forward
[](#transformers.SpeechEncoderDecoderModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L442)
( inputs: typing.Optional\[torch.FloatTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, decoder\_input\_ids: typing.Optional\[torch.LongTensor\] = None, decoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = None, encoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None, past\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = None, decoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, use\_cache: typing.Optional\[bool\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, input\_values: typing.Optional\[torch.FloatTensor\] = None, input\_features: typing.Optional\[torch.FloatTensor\] = None, return\_dict: typing.Optional\[bool\] = None, \*\*kwargs ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`
The [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
```
>>> from transformers import SpeechEncoderDecoderModel, AutoProcessor
>>> from datasets import load_dataset
>>> import torch
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>>
>>> generated = model.generate(input_values)
>>> decoded = processor.batch_decode(generated, skip_special_tokens=True)[0]
>>> decoded
'Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.'
>>>
>>> labels = processor(text=ds[0]["text"], return_tensors="pt").input_ids
>>> loss = model(input_values, labels=labels).loss
>>> loss.backward()```
#### from\_encoder\_decoder\_pretrained
[](#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L287)
( encoder\_pretrained\_model\_name\_or\_path: str = None, decoder\_pretrained\_model\_name\_or\_path: str = None, \*model\_args, \*\*kwargs )
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.
Example:
```
>>> from transformers import SpeechEncoderDecoderModel
>>>
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/wav2vec2-base-960h", "bert-base-uncased"
... )
>>>
>>> model.save_pretrained("./wav2vec2bert")
>>>
>>> model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2bert")```
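As noted above, a model loaded this way is put in evaluation mode. A short sketch of switching it back before fine-tuning, continuing from the example above:
```
>>> _ = model.train()  # re-enable dropout modules for training
>>> model.training
True```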
## [](#transformers.FlaxSpeechEncoderDecoderModel)FlaxSpeechEncoderDecoderModel
### class transformers.FlaxSpeechEncoderDecoderModel
[](#transformers.FlaxSpeechEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L329)
( config: SpeechEncoderDecoderConfig, input\_shape: typing.Optional\[typing.Tuple\] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )
Parameters
- [](#transformers.FlaxSpeechEncoderDecoderModel.config)**config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- [](#transformers.FlaxSpeechEncoderDecoderModel.dtype)**dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`.
**Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.**
If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
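For example (a sketch only, using the checkpoint from the inference example further below), the computation and the parameters could both be moved to bfloat16 like this:
```
>>> import jax.numpy as jnp
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>> # run the computation in bfloat16
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained(
...     "patrickvonplaten/wav2vec2-2-bart-large", dtype=jnp.bfloat16
... )
>>> # additionally cast the parameters themselves to bfloat16
>>> model.params = model.to_bf16(model.params)```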
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.
After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
[FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base model classes of the library as encoder module and another one as decoder module when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
#### \_\_call\_\_
[](#transformers.FlaxSpeechEncoderDecoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L660)
( inputs: ndarray, attention\_mask: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = None, decoder\_input\_ids: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = None, decoder\_attention\_mask: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = None, decoder\_position\_ids: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, freeze\_feature\_encoder: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`
The [FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
```
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> import jax.numpy as jnp
>>>
>>> # load a fine-tuned speech-to-text model and the corresponding tokenizer
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> # create a dummy batch of two 5000-sample raw waveforms
>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)
>>>
>>> model.config.decoder_start_token_id = model.decoder.config.bos_token_id
>>> model.config.pad_token_id = model.decoder.config.pad_token_id
>>> model.config.eos_token_id = model.decoder.config.eos_token_id
>>> outputs = model.generate(inputs)
```
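The module can also be called directly instead of going through `generate`, for example to inspect the logits. A minimal sketch continuing from the example above; `freeze_feature_encoder=True` is mainly useful during training, where it stops gradients from flowing into the convolutional feature encoder:
```
>>> decoder_input_ids = jnp.ones((2, 1), dtype="i4") * model.config.decoder_start_token_id
>>> outputs = model(inputs, decoder_input_ids=decoder_input_ids, freeze_feature_encoder=True)
>>> logits = outputs.logits```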
#### from\_encoder\_decoder\_pretrained
[](#transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L782)
( encoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, decoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, \*model\_args, \*\*kwargs )
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
```
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>>
>>> model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/wav2vec2-large-lv60", "facebook/bart-large"
... )
>>>
>>> model.save_pretrained("./wav2vec2-2-bart-large")
>>>
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large")```
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":true,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","isExpanded":true,"id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/speech-encoder-decoder","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Speech Encoder Decoder Models"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Speech Encoder Decoder Models</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 
text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/align">ALIGN </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/altclip">AltCLIP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/blip">BLIP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/blip-2">BLIP-2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/bridgetower">BridgeTower </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/chinese_clip">Chinese-CLIP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/clip">CLIP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/clipseg">CLIPSeg </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/data2vec">Data2Vec </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/deplot">DePlot </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/donut">Donut </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/flava">FLAVA </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/git">GIT </a><a 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/groupvit">GroupViT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/layoutlm">LayoutLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/layoutlmv2">LayoutLMV2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/layoutlmv3">LayoutLMV3 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/layoutxlm">LayoutXLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/lilt">LiLT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/lxmert">LXMERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/matcha">MatCha </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mgp-str">MGP-STR </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/oneformer">OneFormer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/owlvit">OWL-ViT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/perceiver">Perceiver </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/pix2struct">Pix2Struct </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/sam">Segment Anything </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/model_doc/speech-encoder-decoder">Speech Encoder Decoder Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/tapas">TAPAS </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/trocr">TrOCR </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/model_doc/tvlt">TVLT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/vilt">ViLT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/vision-encoder-decoder">Vision Encoder Decoder Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/vision-text-dual-encoder">Vision Text Dual Encoder </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/visual_bert">VisualBERT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xclip">X-CLIP </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 
pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="speech-encoder-decoder-models" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#speech-encoder-decoder-models"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Speech Encoder Decoder Models</span></h1> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> can be used to initialize a speech-to-text model
with any pretrained speech autoencoding model as the encoder (<em>e.g.</em> <a href="wav2vec2">Wav2Vec2</a>, <a href="hubert">Hubert</a>) and any pretrained autoregressive model as the decoder.</p> <p>The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
recognition and speech translation has <em>e.g.</em> been shown in <a href="https://arxiv.org/abs/2104.06678" rel="nofollow">Large-Scale Self- and Semi-Supervised Learning for Speech
Translation</a> by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
Alexis Conneau.</p> <p>An example of how to use a <a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> for inference can be seen in <a href="speech_to_text_2">Speech2Text2</a>.</p> <h2 class="relative group"><a id="randomly-initializing-speechencoderdecodermodel-from-model-configurations" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#randomly-initializing-speechencoderdecodermodel-from-model-configurations"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Randomly initializing <code>SpeechEncoderDecoderModel</code> from model configurations.</span></h2> <p><a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model">Wav2Vec2Model</a> configuration for the encoder
```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel

>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()

>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = SpeechEncoderDecoderModel(config=config)
```
## [](#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder)Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder

[SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, _e.g._ [Wav2Vec2](wav2vec2) or [Hubert](hubert), can serve as the encoder, while pretrained auto-encoding models (_e.g._ BERT), pretrained causal language models (_e.g._ GPT2), and the pretrained decoder part of sequence-to-sequence models (_e.g._ the decoder of BART) can all be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder).
To do so, the `SpeechEncoderDecoderModel` class provides a [SpeechEncoderDecoderModel.from_encoder_decoder_pretrained()](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained) method.

```
>>> from transformers import SpeechEncoderDecoderModel

>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/hubert-large-ll60k", "bert-base-uncased"
... )
```
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load a fine-tuned speech translation model and corresponding processor</span>
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-xls-r-300m-en-to-15"</span>)
<span class="hljs-meta">>>> </span>processor = Wav2Vec2Processor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-xls-r-300m-en-to-15"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># let's perform inference on a piece of English speech (which we'll translate to German)</span>
<span class="hljs-meta">>>> </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>)
<span class="hljs-meta">>>> </span>input_values = processor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values
<span class="hljs-meta">>>> </span><span class="hljs-comment"># autoregressively generate transcription (uses greedy decoding by default)</span>
<span class="hljs-meta">>>> </span>generated_ids = model.generate(input_values)
<span class="hljs-meta">>>> </span>generated_text = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(generated_text)
## [](#training)Training

Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_values` (which are the speech inputs) and `labels` (which are the `input_ids` of the encoded target sequence).
```
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset

>>> encoder_id = "facebook/wav2vec2-base-960h"  # acoustic model encoder
>>> decoder_id = "bert-base-uncased"  # text decoder

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)
>>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> # load an audio input and pre-process (normalise mean/std to 0/1)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # load its corresponding transcription and tokenize to generate labels
>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
```
<span class="hljs-meta">>>> </span>loss.backward()</pre></div> <h2 class="relative group"><a id="transformers.SpeechEncoderDecoderConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>SpeechEncoderDecoderConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SpeechEncoderDecoderConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">SpeechEncoderDecoderConfig</span></span></h3> <a id="transformers.SpeechEncoderDecoderConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SpeechEncoderDecoderConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
!no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L27" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderConfig.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderConfig.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<em>optional</em>) —
Dictionary of keyword arguments. Notably:<p></p>
<ul>
<li><strong>encoder</strong> (<a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a>, <em>optional</em>) — An instance of a configuration object that defines
the encoder config.</li>
<li><strong>decoder</strong> (<a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a>, <em>optional</em>) — An instance of a configuration object that defines
the decoder config.</li>
</ul></span></span> </li></ul> </div></div> <p><a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig">SpeechEncoderDecoderConfig</a> is the configuration class to store the configuration of a
<a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a>. It is used to instantiate an Encoder Decoder model according to the specified
arguments, defining the encoder and decoder configs.</p> <p>Configuration objects inherit from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the
documentation from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.SpeechEncoderDecoderConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Initializing a Wav2Vec2 & BERT style configuration</span>
<span class="hljs-meta">>>> </span>config_encoder = Wav2Vec2Config()
<span class="hljs-meta">>>> </span>config_decoder = BertConfig()
<span class="hljs-meta">>>> </span>config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Initializing a Wav2Vec2Bert model from a Wav2Vec2 & bert-base-uncased style configurations</span>
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel(config=config)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Accessing the model configuration</span>
<span class="hljs-meta">>>> </span>config_encoder = model.config.encoder
<span class="hljs-meta">>>> </span>config_decoder = model.config.decoder
<span class="hljs-meta">>>> </span><span class="hljs-comment"># set decoder config to causal lm</span>
<span class="hljs-meta">>>> </span>config_decoder.is_decoder = <span class="hljs-literal">True</span>
<span class="hljs-meta">>>> </span>config_decoder.add_cross_attention = <span class="hljs-literal">True</span>
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Saving the model, including its configuration</span>
<span class="hljs-meta">>>> </span>model.save_pretrained(<span class="hljs-string">"my-model"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># loading model and config from pretrained folder</span>
<span class="hljs-meta">>>> </span>encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained(<span class="hljs-string">"my-model"</span>)
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel.from_pretrained(<span class="hljs-string">"my-model"</span>, config=encoder_decoder_config)</pre></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SpeechEncoderDecoderConfig.from_encoder_decoder_configs"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>from_encoder_decoder_configs</span></h4> <a id="transformers.SpeechEncoderDecoderConfig.from_encoder_decoder_configs" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SpeechEncoderDecoderConfig.from_encoder_decoder_configs"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L93" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_config<span class="opacity-60">: PretrainedConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white 
#### from\_encoder\_decoder\_configs

[](#transformers.SpeechEncoderDecoderConfig.from_encoder_decoder_configs)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L93)

( encoder\_config: PretrainedConfig, decoder\_config: PretrainedConfig, \*\*kwargs ) → [SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)

Returns

[SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)

An instance of a configuration object

Instantiate a [SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
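As an additional illustration, the helper also accepts configurations from different model families, _e.g._ a GPT-2 decoder. The sketch below is an assumption-laden example rather than documented behaviour: it relies on the helper marking the decoder sub-config for use inside an encoder-decoder model.

```
>>> from transformers import Wav2Vec2Config, GPT2Config, SpeechEncoderDecoderConfig

>>> # combine a speech encoder config with a causal LM decoder config
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), GPT2Config())

>>> # the decoder sub-config is expected to be flagged for cross-attention
>>> print(config.decoder.is_decoder, config.decoder.add_cross_attention)
```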
#### to\_dict

[](#transformers.SpeechEncoderDecoderConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L110)

( ) → `Dict[str, any]`

Returns

`Dict[str, any]`

Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary. Overrides the default _to_dict()_ from _PretrainedConfig_.
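As a quick illustration of the serialized form, the sketch below assumes the encoder and decoder settings appear as nested dictionaries under the `encoder` and `decoder` keys, next to the usual top-level entries such as `model_type`.

```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig

>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), BertConfig())
>>> config_dict = config.to_dict()

>>> # encoder/decoder settings are expected to be nested sub-dictionaries
>>> print(isinstance(config_dict["encoder"], dict), isinstance(config_dict["decoder"], dict))
>>> print(config_dict["model_type"])
```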
0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L173" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig">SpeechEncoderDecoderConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the `from_pretrained()` function and the decoder is loaded via the `from_pretrained()` function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.

Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.

After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

[SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as encoder and another one as decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
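For example, the model can be warm-started from a pretrained speech encoder checkpoint and a pretrained text decoder checkpoint, and later saved and reloaded like any other model. A minimal sketch (the checkpoint names below are only illustrative choices):

```python
>>> from transformers import SpeechEncoderDecoderModel

>>> # warm-start with Wav2Vec2 as the speech encoder and BERT as the text decoder;
>>> # the decoder's cross-attention layers are randomly initialized and need fine-tuning
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-base-960h", "bert-base-uncased"
... )

>>> # after training, save and reload like any other checkpoint
>>> model.save_pretrained("./wav2vec2-bert")
>>> model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2-bert")
```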
class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight 
px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 16 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.inputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.inputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code> or <code>(batch_size, sequence_length, feature_dim)</code>, <em>optional</em>) —
Float values of input raw speech waveform or speech features. Values can be obtained by loading a <code>.flac</code>
or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile
library (<code>pip install soundfile</code>). To prepare the array into <code>inputs</code>, either the <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor">Wav2Vec2Processor</a> or
<a href="/docs/transformers/v4.30.0/en/model_doc/speech_to_text#transformers.Speech2TextProcessor">Speech2TextProcessor</a> should be used for padding and conversion into a tensor of type
<code>torch.FloatTensor</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p>
<ul>
<li>1 for tokens that are <strong>not masked</strong>,</li>
<li>0 for tokens that are <strong>masked</strong>.</li>
</ul>
<p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) —
Indices of decoder input sequence tokens in the vocabulary.<p></p>
<p>Indices can be obtained using <a href="/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a>. See <a href="/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode">PreTrainedTokenizer.encode()</a> and
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p>
<p><a href="../glossary#input-ids">What are input IDs?</a></p>
<p>If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see
<code>past_key_values</code>).</p>
<p>For training, <code>decoder_input_ids</code> are automatically created by the model by shifting the <code>labels</code> to the
right, replacing -100 by the <code>pad_token_id</code> and prepending them with the <code>decoder_start_token_id</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) —
Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also
be used by default.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>) —
This tuple must consist of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>)
<code>last_hidden_state</code> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) is a tensor
of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the
decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.<p></p>
<p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that
don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all
<code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the
model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, target_sequence_length, hidden_size)</code>, <em>optional</em>) —
Optionally, instead of passing <code>decoder_input_ids</code> you can choose to directly pass an embedded
representation. This is useful if you want more control over how to convert <code>decoder_input_ids</code> indices
into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Labels for computing the masked language modeling loss for the decoder. Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored
(masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) —
If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see
<code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) —
Float values of input raw speech waveform. Values can be obtained by loading a <em>.flac</em> or <em>.wav</em> audio file
into an array of type <em>List[float]</em> or a <em>numpy.ndarray</em>, <em>e.g.</em> via the soundfile library (<em>pip install
soundfile</em>). To prepare the array into <em>input_values</em>, the <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor">Wav2Vec2Processor</a> should be used for padding
and conversion into a tensor of type <em>torch.FloatTensor</em>. See <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, feature_size)</code>, <em>optional</em>) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em>
via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the
<a href="/docs/transformers/v4.30.0/en/model_doc/speech_to_text#transformers.Speech2TextFeatureExtractor">Speech2TextFeatureExtractor</a> should be used for extracting the fbank features, padding and conversion
into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.30.0/en/model_doc/speech_to_text#transformers.Speech2TextFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
If set to <code>True</code>, the model will return a <code>~utils.Seq2SeqLMOutput</code> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<em>optional</em>) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:<p></p>
<ul>
<li>Without a prefix which will be input as <code>**encoder_kwargs</code> for the encoder forward function.</li>
<li>With a <em>decoder_</em> prefix which will be input as <code>**decoder_kwargs</code> for the decoder forward function.</li>
Returns

[transformers.modeling_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The [SpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-xls-r-300m-en-to-15"</span>)
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-xls-r-300m-en-to-15"</span>)
<span class="hljs-meta">>>> </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>)
<span class="hljs-meta">>>> </span>input_values = processor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Inference: Translate English speech to German</span>
<span class="hljs-meta">>>> </span>generated = model.generate(input_values)
<span class="hljs-meta">>>> </span>decoded = processor.batch_decode(generated, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span>decoded
<span class="hljs-string">'Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.'</span>
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Training: Train model on English transcription</span>
<span class="hljs-meta">>>> </span>labels = processor(text=ds[<span class="hljs-number">0</span>][<span class="hljs-string">"text"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_ids
<span class="hljs-meta">>>> </span>loss = model(input_values, labels=labels).loss
<span class="hljs-meta">>>> </span>loss.backward()</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>from_encoder_decoder_pretrained</span></h4> <a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L287" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_pretrained_model_name_or_path<span class="opacity-60">: str = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_pretrained_model_name_or_path<span 
class="opacity-60">: str = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*model_args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 4 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_pretrained_model_name_or_path</strong> (<code>str</code>, <em>optional</em>) —
Information necessary to initiate the encoder. Can be either:<p></p>
<ul>
<li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a
user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li>
<li>A path to a <em>directory</em> containing model weights saved using
<a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li>
<li>A path or url to a <em>tensorflow index checkpoint file</em> (e.g, <code>./tf_model/model.ckpt.index</code>). In
this case, <code>from_tf</code> should be set to <code>True</code> and a configuration object should be provided as
<code>config</code> argument. This loading path is slower than converting the TensorFlow checkpoint in a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_pretrained_model_name_or_path</strong> (<code>str</code>, <em>optional</em>, defaults to <code>None</code>) —
Information necessary to initiate the decoder. Can be either:<p></p>
<ul>
<li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a
user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li>
<li>A path to a <em>directory</em> containing model weights saved using
<a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li>
<li>A path or url to a <em>tensorflow index checkpoint file</em> (e.g, <code>./tf_model/model.ckpt.index</code>). In
this case, <code>from_tf</code> should be set to <code>True</code> and a configuration object should be provided as
<code>config</code> argument. This loading path is slower than converting the TensorFlow checkpoint in a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.</li>
</ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.model_args" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.model_args"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>model_args</strong> (remaining positional arguments, <em>optional</em>) —
All remaning positional arguments will be passed to the underlying model’s <code>__init__</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (remaining dictionary of keyword arguments, <em>optional</em>) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
<code>output_attentions=True</code>).<p></p>
<ul>
<li>To update the encoder configuration, use the prefix <em>encoder_</em> for each configuration parameter.</li>
<li>To update the decoder configuration, use the prefix <em>decoder_</em> for each configuration parameter.</li>
<li>To update the parent model configuration, do not use a prefix for each configuration parameter.</li>
</ul>
<p>Behaves differently depending on whether a <code>config</code> is provided or automatically loaded.</p></span></span> </li></ul> </div></div> <p>Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.</p> <p>The model is set in evaluation mode by default using <code>model.eval()</code> (Dropout modules are deactivated). To train
the model, you need to first set it back in training mode with <code>model.train()</code>.</p> <div class="relative group rounded-md"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> SpeechEncoderDecoderModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># initialize a wav2vec2bert from a pretrained Wav2Vec2 and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized</span>
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
<span class="hljs-meta">... </span> <span class="hljs-string">"facebook/wav2vec2-base-960h"</span>, <span class="hljs-string">"bert-base-uncased"</span>
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># saving model after fine-tuning</span>
<span class="hljs-meta">>>> </span>model.save_pretrained(<span class="hljs-string">"./wav2vec2bert"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load fine-tuned model</span>
<span class="hljs-meta">>>> </span>model = SpeechEncoderDecoderModel.from_pretrained(<span class="hljs-string">"./wav2vec2bert"</span>)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxSpeechEncoderDecoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxSpeechEncoderDecoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>FlaxSpeechEncoderDecoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxSpeechEncoderDecoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxSpeechEncoderDecoderModel</span></span></h3> <a id="transformers.FlaxSpeechEncoderDecoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxSpeechEncoderDecoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
## [](#transformers.FlaxSpeechEncoderDecoderModel)FlaxSpeechEncoderDecoderModel

### class transformers.FlaxSpeechEncoderDecoderModel

[](#transformers.FlaxSpeechEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L329)

( config: SpeechEncoderDecoderConfig, input\_shape: typing.Optional[typing.Tuple] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )

Parameters

- **config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).

  This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`.

  **Note that this only specifies the dtype of the computation and does not influence the dtype of the model parameters.**

  If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). A short usage sketch follows the class description below.

This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.

Additionally, [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) shows how leveraging large pretrained speech models for speech translation yields a significant performance improvement.

After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

[FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base model classes of the library as the encoder module and another one as the decoder module, when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
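
As a concrete illustration of the `dtype` argument described above, the sketch below loads a Flax speech encoder-decoder checkpoint with the forward computation in `bfloat16`. The checkpoint name is the one used in the `__call__` example further down; whether half precision is appropriate depends on your hardware, so treat this as an assumption-laden sketch rather than a recommendation.

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxSpeechEncoderDecoderModel

>>> # run the forward computation in bfloat16; the parameters themselves stay in float32
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained(
...     "patrickvonplaten/wav2vec2-2-bart-large", dtype=jnp.bfloat16
... )
>>> # to also cast the parameters, use to_bf16() on the params pytree
>>> params = model.to_bf16(model.params)
```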
#### \_\_call\_\_

[](#transformers.FlaxSpeechEncoderDecoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L660)

( inputs: ndarray, attention\_mask: typing.Optional[ndarray] = None, decoder\_input\_ids: typing.Optional[ndarray] = None, decoder\_attention\_mask: typing.Optional[ndarray] = None, decoder\_position\_ids: typing.Optional[ndarray] = None, output\_attentions: typing.Optional[bool] = None, output\_hidden\_states: typing.Optional[bool] = None, return\_dict: typing.Optional[bool] = None, train: bool = False, freeze\_feature\_encoder: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **inputs** (`jnp.ndarray` of shape `(batch_size, sequence_length)` or `(batch_size, sequence_length, feature_dim)`, *optional*) — Float values of the input raw speech waveform or speech features. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `inputs`, either the [Wav2Vec2Processor](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) or [Speech2TextProcessor](/docs/transformers/v4.30.0/en/model_doc/speech_to_text#transformers.Speech2TextProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`.
- **attention_mask** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **decoder_input_ids** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary.

  Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)

  If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).

  For sequence-to-sequence training, `decoder_input_ids` should be provided. `decoder_input_ids` should be created outside of the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id` (see the label-shifting sketch after the example below).
- **decoder_attention_mask** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default.
- **decoder_position_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*) — Indices of the position of each decoder input sequence token in the position embeddings. Selected in the range `[0, config.decoder.max_position_embeddings - 1]`.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — If set to `True`, the model will return a `~utils.FlaxSeq2SeqLMOutput` instead of a plain tuple.

Returns

[transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput</a> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig">SpeechEncoderDecoderConfig</a>) and inputs.</p>
<ul>
<li>
<p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p>
</li>
<li>
<p><strong>past_key_values</strong> (<code>tuple(tuple(jnp.ndarray))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(jnp.ndarray)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape
<code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape
<code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p>
<p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p>
</li>
<li>
<p><strong>decoder_hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape
<code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p>
</li>
<li>
<p><strong>decoder_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.</p>
</li>
<li>
<p><strong>cross_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.</p>
</li>
<li>
<p><strong>encoder_last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p>
</li>
<li>
<p><strong>encoder_hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape
<code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p>
</li>
<li>
<p><strong>encoder_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.</p>
</li>
</ul>
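
The fields above can be inspected directly on the object returned by a plain forward call. The following is a minimal sketch, assuming the same checkpoint as the canonical example below; the toy decoder prompt of zeros is used only to make the call self-contained.

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxSpeechEncoderDecoderModel

>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)  # dummy raw waveform batch
>>> decoder_input_ids = jnp.zeros((2, 4), dtype="i4")  # toy decoder prompt, for illustration only
>>> outputs = model(inputs, decoder_input_ids=decoder_input_ids)
>>> outputs.logits.shape  # (batch_size, target_sequence_length, config.decoder.vocab_size)
>>> outputs.encoder_last_hidden_state.shape  # (batch_size, downsampled_sequence_length, hidden_size)
```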
The [FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> import jax.numpy as jnp

>>> # load a fine-tuned wav2vec2-2-bart model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> # load output tokenizer
>>> tokenizer_output = AutoTokenizer.from_pretrained("facebook/bart-large")

>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)

>>> # use bart's special bos, pad and eos tokens
>>> model.config.decoder_start_token_id = model.decoder.config.bos_token_id
>>> model.config.pad_token_id = model.decoder.config.pad_token_id
>>> model.config.eos_token_id = model.decoder.config.eos_token_id

>>> outputs = model.generate(inputs)
```
#### from\_encoder\_decoder\_pretrained

[](#transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L782)

( encoder\_pretrained\_model\_name\_or\_path: typing.Union[str, os.PathLike, NoneType] = None, decoder\_pretrained\_model\_name\_or\_path: typing.Union[str, os.PathLike, NoneType] = None, \*model\_args, \*\*kwargs )

Parameters

- **encoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, *optional*) — Information necessary to initiate the encoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **decoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, *optional*, defaults to `None`) — Information necessary to initiate the decoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **model_args** (remaining positional arguments, *optional*) —
All remaning positional arguments will be passed to the underlying model’s <code>__init__</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (remaining dictionary of keyword arguments, <em>optional</em>) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
<code>output_attentions=True</code>).<p></p>
<ul>
<li>To update the encoder configuration, use the prefix <em>encoder_</em> for each configuration parameter.</li>
<li>To update the decoder configuration, use the prefix <em>decoder_</em> for each configuration parameter.</li>
<li>To update the parent model configuration, do not use a prefix for each configuration parameter.</li>
</ul>
<p>Behaves differently depending on whether a <code>config</code> is provided or automatically loaded.</p></span></span> </li></ul> </div></div> <p>Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.</p> <div class="relative group rounded-md"><a id="transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> FlaxSpeechEncoderDecoderModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># initialize a wav2vec2-2-bart from pretrained wav2vec2 and bart models. Note that the cross-attention layers will be randomly initialized</span>
<span class="hljs-meta">>>> </span>model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
<span class="hljs-meta">... </span> <span class="hljs-string">"facebook/wav2vec2-large-lv60"</span>, <span class="hljs-string">"facebook/bart-large"</span>
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># saving model after fine-tuning</span>
<span class="hljs-meta">>>> </span>model.save_pretrained(<span class="hljs-string">"./wav2vec2-2-bart-large"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load fine-tuned model</span>
<span class="hljs-meta">>>> </span>model = FlaxSpeechEncoderDecoderModel.from_pretrained(<span class="hljs-string">"./wav2vec2-2-bart-large"</span>)</pre></div></div></div></div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/sam" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Segment Anything</a>
<a href="/docs/transformers/model_doc/tapas" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">TAPAS<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Speech Encoder Decoder Models","isExpanded":true,"id":"speech-encoder-decoder-models","url":"#speech-encoder-decoder-models","sections":[{"title":"Randomly initializing `SpeechEncoderDecoderModel` from model configurations.","isExpanded":true,"id":"randomly-initializing-speechencoderdecodermodel-from-model-configurations","url":"#randomly-initializing-speechencoderdecodermodel-from-model-configurations"},{"title":"Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.","isExpanded":true,"id":"initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder","url":"#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"},{"title":"Loading an existing `SpeechEncoderDecoderModel` checkpoint and perform inference.","isExpanded":true,"id":"loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference","url":"#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference"},{"title":"Training","isExpanded":true,"id":"training","url":"#training"},{"title":"SpeechEncoderDecoderConfig","isExpanded":true,"id":"transformers.SpeechEncoderDecoderConfig","url":"#transformers.SpeechEncoderDecoderConfig"},{"title":"SpeechEncoderDecoderModel","isExpanded":true,"id":"transformers.SpeechEncoderDecoderModel","url":"#transformers.SpeechEncoderDecoderModel"},{"title":"FlaxSpeechEncoderDecoderModel","isExpanded":true,"id":"transformers.FlaxSpeechEncoderDecoderModel","url":"#transformers.FlaxSpeechEncoderDecoderModel"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#speech-encoder-decoder-models" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-speech-encoder-decoder-models"><wbr>Speech <wbr>Encoder <wbr>Decoder <wbr>Models</a> <a href="#randomly-initializing-speechencoderdecodermodel-from-model-configurations" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-randomly-initializing-speechencoderdecodermodel-from-model-configurations"><wbr>Randomly initializing `<wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Model` from model configurations.</a> <a href="#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"><wbr>Initialising `<wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Model` from a pretrained encoder and a pretrained decoder.</a> <a href="#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference"><wbr>Loading an existing `<wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Model` checkpoint and perform inference.</a> <a href="#training" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-training"><wbr>Training</a> <a href="#transformers.SpeechEncoderDecoderConfig" class="pl-4 text-gray-400 transform hover:translate-x-px 
hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SpeechEncoderDecoderConfig"><wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Config</a> <a href="#transformers.SpeechEncoderDecoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SpeechEncoderDecoderModel"><wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Model</a> <a href="#transformers.FlaxSpeechEncoderDecoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxSpeechEncoderDecoderModel"><wbr>Flax<wbr>Speech<wbr>Encoder<wbr>Decoder<wbr>Model</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/speech-encoder-decoder" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/speech-encoder-decoder");
}
</script>
<iframe name="__privateStripeMetricsController1690" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Fspeech-encoder-decoder&title=Speech%20Encoder%20Decoder%20Models&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:00.549Z |
https://huggingface.co/docs/transformers/tasks/question-answering | The documentation page TASKS/QUESTION-ANSWERING doesn’t exist in v4.30.0, but exists on the main version. Click [here](/docs/transformers/main/en/tasks/question-answering) to redirect to the main version of the documentation.
VisionTextDualEncoder | https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder | ## [](#overview)Overview
The [VisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (_e.g._ [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (_e.g._ [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoders to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings with CLIP-like contrastive image-text training and can then be used for zero-shot vision tasks such as image classification or retrieval.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
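For example, zero-shot image classification with a fine-tuned dual encoder amounts to scoring an image against a set of candidate captions. A minimal sketch, where `"my-org/my-vit-bert-dual-encoder"` is a hypothetical fine-tuned checkpoint saved together with its processor:

```
>>> from PIL import Image
>>> import requests
>>> from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

>>> # hypothetical fine-tuned checkpoint saved together with its processor
>>> checkpoint = "my-org/my-vit-bert-dual-encoder"
>>> model = VisionTextDualEncoderModel.from_pretrained(checkpoint)
>>> processor = VisionTextDualEncoderProcessor.from_pretrained(checkpoint)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
>>> outputs = model(**inputs)
>>> probs = outputs.logits_per_image.softmax(dim=1)  # per-caption probabilities for the image
```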
## [](#transformers.VisionTextDualEncoderConfig)VisionTextDualEncoderConfig
### class transformers.VisionTextDualEncoderConfig
[](#transformers.VisionTextDualEncoderConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L28)
( projection\_dim = 512logit\_scale\_init\_value = 2.6592\*\*kwargs )
Parameters
- [](#transformers.VisionTextDualEncoderConfig.text_config)**text\_config** (`dict`) — Dictionary of configuration options that defines text model config.
- [](#transformers.VisionTextDualEncoderConfig.vision_config)**vision\_config** (`dict`) — Dictionary of configuration options that defines vision model config.
- [](#transformers.VisionTextDualEncoderConfig.projection_dim)**projection\_dim** (`int`, _optional_, defaults to 512) — Dimensionality of text and vision projection layers.
- [](#transformers.VisionTextDualEncoderConfig.logit_scale_init_value)**logit\_scale\_init\_value** (`float`, _optional_, defaults to 2.6592) — The initial value of the _logit\_scale_ parameter. Default is used as per the original CLIP implementation.
- [](#transformers.VisionTextDualEncoderConfig.kwargs)**kwargs** (_optional_) — Dictionary of keyword arguments.
[VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig) is the configuration class to store the configuration of a [VisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel). It is used to instantiate [VisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) model according to the specified arguments, defining the text model and vision model configs.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:
```
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel
>>>
>>> config_vision = ViTConfig()
>>> config_text = BertConfig()
>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512)
>>>
>>> model = VisionTextDualEncoderModel(config=config)
>>>
>>> config_vision = model.config.vision_config
>>> config_text = model.config.text_config
>>>
>>> model.save_pretrained("vit-bert")
>>>
>>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config)```
#### to\_dict
[](#transformers.VisionTextDualEncoderConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L117)
( ) → `Dict[str, any]`
Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary. Overrides the default [to\_dict()](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig.to_dict).
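For instance, the nested sub-configurations are serialized as plain dictionaries; a minimal sketch:

```
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig

>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(ViTConfig(), BertConfig())
>>> config_dict = config.to_dict()

>>> config_dict["projection_dim"]
512
>>> # both sub-configs are serialized as plain dictionaries
>>> isinstance(config_dict["vision_config"], dict) and isinstance(config_dict["text_config"], dict)
True
```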
## [](#transformers.VisionTextDualEncoderProcessor)VisionTextDualEncoderProcessor
### class transformers.VisionTextDualEncoderProcessor
[](#transformers.VisionTextDualEncoderProcessor)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L25)
( image\_processor = Nonetokenizer = None\*\*kwargs )
Parameters
- [](#transformers.VisionTextDualEncoderProcessor.image_processor)**image\_processor** ([AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)) — The image processor is a required input.
- [](#transformers.VisionTextDualEncoderProcessor.tokenizer)**tokenizer** ([PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer)) — The tokenizer is a required input.
Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor.
[VisionTextDualEncoderProcessor](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor) offers all the functionalities of [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor) and [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See the `__call__()` and [decode()](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor.decode) for more information.
#### batch\_decode

This method forwards all its arguments to VisionTextDualEncoderTokenizer’s [batch\_decode()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode). Please refer to the docstring of this method for more information.

#### decode

This method forwards all its arguments to VisionTextDualEncoderTokenizer’s [decode()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode). Please refer to the docstring of this method for more information.
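A minimal sketch of wrapping the two components and round-tripping text through the processor (the checkpoint names are only examples):

```
>>> from transformers import AutoImageProcessor, AutoTokenizer, VisionTextDualEncoderProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

>>> # text is handled by the tokenizer, images (when passed) by the image processor
>>> encoding = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
>>> processor.batch_decode(encoding.input_ids, skip_special_tokens=True)
['a photo of a cat']
```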
## [](#transformers.VisionTextDualEncoderModel)VisionTextDualEncoderModel
### class transformers.VisionTextDualEncoderModel
[](#transformers.VisionTextDualEncoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L162)
( config: typing.Optional\[transformers.models.vision\_text\_dual\_encoder.configuration\_vision\_text\_dual\_encoder.VisionTextDualEncoderConfig\] = Nonevision\_model: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = Nonetext\_model: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None )
Parameters
- [](#transformers.VisionTextDualEncoderModel.config)**config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
[](#transformers.VisionTextDualEncoderModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L293)
( input\_ids: typing.Optional\[torch.LongTensor\] = Nonepixel\_values: typing.Optional\[torch.FloatTensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonereturn\_loss: typing.Optional\[bool\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_clip.CLIPOutput` or `tuple(torch.FloatTensor)`
The [VisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
```
>>> from PIL import Image
>>> import requests
>>> from transformers import (
... VisionTextDualEncoderModel,
... VisionTextDualEncoderProcessor,
... AutoImageProcessor,
... AutoTokenizer,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
... "google/vit-base-patch16-224", "bert-base-uncased"
... )
>>>
>>> urls = [
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="pt", padding=True
... )
>>> outputs = model(
... input_ids=inputs.input_ids,
... attention_mask=inputs.attention_mask,
... pixel_values=inputs.pixel_values,
... return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image
>>>
>>> model.save_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert")
>>>
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = logits_per_image.softmax(dim=1) ```
## [](#transformers.FlaxVisionTextDualEncoderModel)FlaxVisionTextDualEncoderModel
### class transformers.FlaxVisionTextDualEncoderModel
[](#transformers.FlaxVisionTextDualEncoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L219)
( config: VisionTextDualEncoderConfiginput\_shape: typing.Optional\[typing.Tuple\] = Noneseed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs )
Parameters
- [](#transformers.FlaxVisionTextDualEncoderModel.config)**config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- [](#transformers.FlaxVisionTextDualEncoderModel.dtype)**dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`.
**Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.**
If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
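A minimal sketch of the distinction (the `"vit-bert"` directory refers to a dual-encoder checkpoint saved as in the examples below; `dtype` only changes the computation, while `to_bf16()` casts the parameters):

```
>>> import jax.numpy as jnp
>>> from transformers import FlaxVisionTextDualEncoderModel

>>> # run the forward pass in bfloat16 while the stored parameters stay in float32
>>> model = FlaxVisionTextDualEncoderModel.from_pretrained("vit-bert", dtype=jnp.bfloat16)

>>> # casting the parameters themselves is a separate, explicit step
>>> model.params = model.to_bf16(model.params)
```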
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
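For instance, the forward pass can be compiled with `jax.jit` and reused across batches; a minimal sketch (the checkpoints match the example below, and `get_similarity` is just an illustrative helper name):

```
>>> import jax
>>> from transformers import FlaxVisionTextDualEncoderModel

>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> @jax.jit
... def get_similarity(input_ids, attention_mask, pixel_values):
...     # the model call is functionally pure given its (closed-over) parameters,
...     # so it can be traced once and the compiled version reused for many batches
...     outputs = model(input_ids=input_ids, attention_mask=attention_mask, pixel_values=pixel_values)
...     return outputs.logits_per_image
```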
#### \_\_call\_\_
[](#transformers.FlaxVisionTextDualEncoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L269)
( input\_idspixel\_valuesattention\_mask = Noneposition\_ids = Nonetoken\_type\_ids = Noneparams: dict = Nonedropout\_rng: PRNGKey = Nonetrain: bool = Falseoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)`
The [FlaxVisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.FlaxVisionTextDualEncoderModel) forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
```
>>> from PIL import Image
>>> import requests
>>> import jax
>>> from transformers import (
... FlaxVisionTextDualEncoderModel,
... VisionTextDualEncoderProcessor,
... AutoImageProcessor,
... AutoTokenizer,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
... "google/vit-base-patch16-224", "bert-base-uncased"
... )
>>>
>>> urls = [
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
... input_ids=inputs.input_ids,
... attention_mask=inputs.attention_mask,
... pixel_values=inputs.pixel_values,
... )
>>> logits_per_image = outputs.logits_per_image
>>>
>>> model.save_pretrained("vit-bert")
>>> model = FlaxVisionTextDualEncoderModel.from_pretrained("vit-bert")
>>>
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = jax.nn.softmax(logits_per_image, axis=1) ```
## [](#transformers.TFVisionTextDualEncoderModel)TFVisionTextDualEncoderModel
### class transformers.TFVisionTextDualEncoderModel
[](#transformers.TFVisionTextDualEncoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L176)
( \*args\*\*kwargs )
Parameters
- [](#transformers.TFVisionTextDualEncoderModel.config)**config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from [TFPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a Keras [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular Keras Model and refer to the TF documentation for all matter related to general usage and behavior.
#### call
[](#transformers.TFVisionTextDualEncoderModel.call)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L341)
( input\_ids: tf.Tensor | None = Nonepixel\_values: tf.Tensor | None = Noneattention\_mask: tf.Tensor | None = Noneposition\_ids: tf.Tensor | None = Nonereturn\_loss: Optional\[bool\] = Nonetoken\_type\_ids: tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False ) → `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or `tuple(tf.Tensor)`
The [TFVisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.TFVisionTextDualEncoderModel) forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
```
>>> from PIL import Image
>>> import requests
>>> from transformers import (
... TFVisionTextDualEncoderModel,
... VisionTextDualEncoderProcessor,
... AutoImageProcessor,
... AutoTokenizer,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
... "google/vit-base-patch16-224", "bert-base-uncased"
... )
>>>
>>> urls = [
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
... input_ids=inputs.input_ids,
... attention_mask=inputs.attention_mask,
... pixel_values=inputs.pixel_values,
... return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image
>>>
>>> model.save_pretrained("vit-bert")
>>> model = TFVisionTextDualEncoderModel.from_pretrained("vit-bert")
>>>
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = tf.nn.softmax(logits_per_image, axis=1)```
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="VisionTextDualEncoder">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>VisionTextDualEncoder</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"visiontextdualencoder","sections":[{"local":"overview","title":"Overview"},{"local":"transformers.VisionTextDualEncoderConfig","title":"VisionTextDualEncoderConfig"},{"local":"transformers.VisionTextDualEncoderProcessor","title":"VisionTextDualEncoderProcessor"},{"local":"transformers.VisionTextDualEncoderModel","title":"VisionTextDualEncoderModel"},{"local":"transformers.FlaxVisionTextDualEncoderModel","title":"FlaxVisionTextDualEncoderModel"},{"local":"transformers.TFVisionTextDualEncoderModel","title":"TFVisionTextDualEncoderModel"}],"title":"VisionTextDualEncoder"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":true,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","isExpanded":true,"id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/vision-text-dual-encoder","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"VisionTextDualEncoder"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">VisionTextDualEncoder</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="visiontextdualencoder" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#visiontextdualencoder"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>VisionTextDualEncoder</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel">VisionTextDualEncoderModel</a> can be used to initialize a vision-text dual encoder model with
any pretrained vision autoencoding model as the vision encoder (<em>e.g.</em> <a href="vit">ViT</a>, <a href="beit">BEiT</a>, <a href="deit">DeiT</a>) and any pretrained text autoencoding model as the text encoder (<em>e.g.</em> <a href="roberta">RoBERTa</a>, <a href="bert">BERT</a>). Two projection layers are added on top of both the vision and text encoder to project the output embeddings
to a shared latent space. The projection layers are randomly initialized so the model should be fine-tuned on a
downstream task. This model can be used to align the vision-text embeddings using CLIP like contrastive image-text
training and then can be used for zero-shot vision tasks such image-classification or retrieval.</p> <p>In <a href="https://arxiv.org/abs/2111.07991" rel="nofollow">LiT: Zero-Shot Transfer with Locked-image Text Tuning</a> it is shown how
leveraging pre-trained (locked/frozen) image and text model for contrastive learning yields significant improvement on
new zero-shot vision tasks such as image classification or retrieval.</p> <h2 class="relative group"><a id="transformers.VisionTextDualEncoderConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>VisionTextDualEncoderConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VisionTextDualEncoderConfig</span></span></h3> <a id="transformers.VisionTextDualEncoderConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a 
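Once a checkpoint has been trained contrastively, it can be used for CLIP-style scoring of image-text pairs. The snippet below is a minimal sketch of zero-shot image-text matching; `"path/to/dual-encoder-checkpoint"` is a placeholder for a contrastively fine-tuned checkpoint of your own, and the COCO image URL is only an example input:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

>>> # Placeholder path: use a checkpoint that has been fine-tuned contrastively,
>>> # since a freshly initialized model has random projection layers.
>>> model = VisionTextDualEncoderModel.from_pretrained("path/to/dual-encoder-checkpoint")
>>> processor = VisionTextDualEncoderProcessor.from_pretrained("path/to/dual-encoder-checkpoint")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # CLIP-style similarity: probability of each caption matching the image
>>> probs = outputs.logits_per_image.softmax(dim=1)
```

During training, the forward pass also accepts `return_loss=True` to compute a CLIP-style contrastive loss, and, following LiT, the image tower can be kept locked by disabling gradients on the vision encoder's parameters.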
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L28" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projection_dim<span class="opacity-60"> = 512</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logit_scale_init_value<span class="opacity-60"> = 2.6592</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderConfig.text_config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig.text_config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_config</strong> (<code>dict</code>) —
Dictionary of configuration options that defines text model config.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderConfig.vision_config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig.vision_config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vision_config</strong> (<code>dict</code>) —
Dictionary of configuration options that defines vison model config.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderConfig.projection_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig.projection_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>projection_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 512) —
Dimentionality of text and vision projection layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderConfig.logit_scale_init_value" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig.logit_scale_init_value"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logit_scale_init_value</strong> (<code>float</code>, <em>optional</em>, defaults to 2.6592) —
The inital value of the <em>logit_scale</em> paramter. Default is used as per the original CLIP implementation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderConfig.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderConfig.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<em>optional</em>) —
Dictionary of keyword arguments.</span></span> </li></ul> </div></div> <p><a href="/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a> is the configuration class to store the configuration of a
<a href="/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel">VisionTextDualEncoderModel</a>. It is used to instantiate <a href="/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel">VisionTextDualEncoderModel</a> model according to the
specified arguments, defining the text model and vision model configs.</p> <p>Configuration objects inherit from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the
Examples:

```python
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel

>>> # Initializing a BERT and ViT configuration
>>> config_vision = ViTConfig()
>>> config_text = BertConfig()

>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512)

>>> # Initializing a BERT and ViT model (with random weights)
>>> model = VisionTextDualEncoderModel(config=config)

>>> # Accessing the model configuration
>>> config_vision = model.config.vision_config
>>> config_text = model.config.text_config

>>> # Saving the model, including its configuration
>>> model.save_pretrained("vit-bert")

>>> # loading model and config from pretrained folder
>>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config)
```
#### from\_vision\_text\_configs

[](#transformers.VisionTextDualEncoderConfig.from_vision_text_configs)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L105)

( vision\_config: PretrainedConfig, text\_config: PretrainedConfig, \*\*kwargs ) → [VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)

Returns: [VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig) — An instance of a configuration object.

Instantiate a [VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig) (or a derived class) from text model configuration and vision model configuration.
#### to\_dict

[](#transformers.VisionTextDualEncoderConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L117)

( ) → `Dict[str, any]`

Returns: `Dict[str, any]` — Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary. Overrides the default [to_dict()](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig.to_dict).
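As a quick illustration (a minimal sketch rather than part of the reference itself), the returned dictionary also contains the serialized sub-model configs, so it can be written straight to JSON:

```python
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig

>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(ViTConfig(), BertConfig())

>>> # to_dict() returns a plain Python dictionary; the vision and text sub-configs
>>> # are expected to appear as nested dictionaries under their own keys
>>> config_dict = config.to_dict()
>>> vision_dict = config_dict["vision_config"]
>>> text_dict = config_dict["text_config"]
```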
Serializes this instance to a Python dictionary. Override the default [to_dict()](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig.to_dict).

## [](#transformers.VisionTextDualEncoderProcessor)VisionTextDualEncoderProcessor

### class transformers.VisionTextDualEncoderProcessor

[](#transformers.VisionTextDualEncoderProcessor)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L25)

( image\_processor = Nonetokenizer = None\*\*kwargs )

Parameters:

- **image\_processor** ([AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)) — The image processor is a required input.
- **tokenizer** ([PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer)) — The tokenizer is a required input.

Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor.

[VisionTextDualEncoderProcessor](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor) offers all the functionalities of [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor) and [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See the `__call__()` and [decode()](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor.decode) for more information.
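To make the wrapping concrete, here is a minimal sketch of building, saving and reloading such a processor. The checkpoint names are simply the ones reused in the model example further below, and `"vit-bert-processor"` is a hypothetical local directory:

```
>>> from transformers import AutoImageProcessor, AutoTokenizer, VisionTextDualEncoderProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> processor = VisionTextDualEncoderProcessor(image_processor=image_processor, tokenizer=tokenizer)

>>> # the processor behaves like any other processor: it can be saved and reloaded
>>> processor.save_pretrained("vit-bert-processor")
>>> processor = VisionTextDualEncoderProcessor.from_pretrained("vit-bert-processor")
```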
information.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderProcessor.batch_decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_decode</span></h4> <a id="transformers.VisionTextDualEncoderProcessor.batch_decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderProcessor.batch_decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L115" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p>This method forwards all its arguments to 
VisionTextDualEncoderTokenizer’s
<a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode">batch_decode()</a>. Please refer to the docstring of this method for more information.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderProcessor.decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>decode</span></h4> <a id="transformers.VisionTextDualEncoderProcessor.decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderProcessor.decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L122" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span 
class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p>This method forwards all its arguments to VisionTextDualEncoderTokenizer’s <a href="/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode">decode()</a>.
Please refer to the docstring of this method for more information.</p></div></div> <h2 class="relative group"><a id="transformers.VisionTextDualEncoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>VisionTextDualEncoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VisionTextDualEncoderModel</span></span></h3> <a id="transformers.VisionTextDualEncoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a 
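As a quick sketch reusing the `processor` built above, decoding simply round-trips through the wrapped tokenizer:

```
>>> encoding = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
>>> processor.batch_decode(encoding.input_ids, skip_special_tokens=True)
```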
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L162" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: typing.Optional[transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vision_model<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_model<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields a significant improvement on new zero-shot vision tasks such as image classification or retrieval.

After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
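For illustration, a minimal sketch of a LiT-style setup: the two towers are combined with `from_vision_text_pretrained()`, then the vision tower is frozen by hand so that only the text tower and the newly initialized projection layers receive gradients. The freezing step is plain PyTorch, not a dedicated API of this class:

```
>>> from transformers import VisionTextDualEncoderModel

>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )
>>> # lock the image tower (LiT-style); projection layers and text tower stay trainable
>>> for param in model.vision_model.parameters():
...     param.requires_grad = False
>>> trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```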
#### forward

[](#transformers.VisionTextDualEncoderModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L293)

( input\_ids: typing.Optional\[torch.LongTensor\] = Nonepixel\_values: typing.Optional\[torch.FloatTensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonereturn\_loss: typing.Optional\[bool\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_clip.CLIPOutput` or `tuple(torch.FloatTensor)`

Parameters:

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)). See [ViTImageProcessor.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2FeatureExtractor.__call__) for details.
- **return\_loss** (`bool`, *optional*) — Whether or not to return the contrastive loss.
- **output\_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
Returns

`transformers.models.clip.modeling_clip.CLIPOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.clip.modeling_clip.CLIPOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`) — Contrastive loss for image-text similarity.
- **logits\_per\_image** (`torch.FloatTensor` of shape `(image_batch_size, text_batch_size)`) — The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- **logits\_per\_text** (`torch.FloatTensor` of shape `(text_batch_size, image_batch_size)`) — The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- **text\_embeds** (`torch.FloatTensor` of shape `(batch_size, output_dim)`) — The text embeddings obtained by applying the projection layer to the pooled output of [CLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPTextModel).
- **image\_embeds** (`torch.FloatTensor` of shape `(batch_size, output_dim)`) — The image embeddings obtained by applying the projection layer to the pooled output of [CLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPVisionModel).
- **text\_model\_output** (`BaseModelOutputWithPooling`) — The output of the [CLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPTextModel).
- **vision\_model\_output** (`BaseModelOutputWithPooling`) — The output of the [CLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPVisionModel).
The [VisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:

```
>>> from PIL import Image
>>> import requests
>>> from transformers import (
...     VisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="pt", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
...     return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image  # this is the image-text similarity score

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
## [](#transformers.FlaxVisionTextDualEncoderModel)FlaxVisionTextDualEncoderModel

### class transformers.FlaxVisionTextDualEncoderModel

[](#transformers.FlaxVisionTextDualEncoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L219)

( config: VisionTextDualEncoderConfiginput\_shape: typing.Optional\[typing.Tuple\] = Noneseed: int = 0dtype: dtype = \<class 'jax.numpy.float32'\>\_do\_init: bool = True\*\*kwargs )

Parameters:

- **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields a significant improvement on new zero-shot vision tasks such as image classification or retrieval.

After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
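To round off, a minimal sketch of building the Flax variant from the same two towers (assuming Flax weights are available for both checkpoints) and casting its parameters to `bfloat16`, as discussed for the `dtype` argument above:

```
>>> from transformers import FlaxVisionTextDualEncoderModel

>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )
>>> # cast the parameters themselves (not just the computation dtype) to bfloat16
>>> model.params = model.to_bf16(model.params)
```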
#### \_\_call\_\_

[](#transformers.FlaxVisionTextDualEncoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L269)

( input\_idspixel\_valuesattention\_mask = Noneposition\_ids = Nonetoken\_type\_ids = Noneparams: dict = Nonedropout\_rng: PRNGKey = Nonetrain: bool = Falseoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)`

Parameters:

- **input\_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position\_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)). See `ViTImageProcessor.__call__()` for details.
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) and inputs.
- **logits\_per\_image** (`jnp.ndarray` of shape `(image_batch_size, text_batch_size)`) — The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- **logits\_per\_text** (`jnp.ndarray` of shape `(text_batch_size, image_batch_size)`) — The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- **text\_embeds** (`jnp.ndarray` of shape `(batch_size, output_dim)`) — The text embeddings obtained by applying the projection layer to the pooled output of [FlaxCLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.FlaxCLIPTextModel).
- **image\_embeds** (`jnp.ndarray` of shape `(batch_size, output_dim)`) — The image embeddings obtained by applying the projection layer to the pooled output of [FlaxCLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.FlaxCLIPVisionModel).
- **text\_model\_output** (`FlaxBaseModelOutputWithPooling`) — The output of the [FlaxCLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.FlaxCLIPTextModel).
- **vision\_model\_output** (`FlaxBaseModelOutputWithPooling`) — The output of the [FlaxCLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.FlaxCLIPVisionModel).
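As a rough sketch of how these fields relate, the similarity logits are dot products between the L2-normalized projected embeddings, up to a learned logit scale. The helper below is illustrative only (it is not part of the API) and deliberately omits that learned scale, which cannot be recovered from the output alone.

```python
# Illustrative sketch only: reconstructing unscaled image-text similarities
# from the returned embeddings; the learned logit scale is intentionally omitted.
import jax.numpy as jnp

def unscaled_similarity(image_embeds, text_embeds):
    image_embeds = image_embeds / jnp.linalg.norm(image_embeds, axis=-1, keepdims=True)
    text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True)
    # Shape: (image_batch_size, text_batch_size), matching `logits_per_image` up to scale.
    return image_embeds @ text_embeds.T
```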
The [FlaxVisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.FlaxVisionTextDualEncoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:

```python
>>> from PIL import Image
>>> import requests
>>> import jax
>>> from transformers import (
...     FlaxVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
... )
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = FlaxVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = jax.nn.softmax(logits_per_image, axis=1)  # we can take the softmax to get the label probabilities
```
<span class="hljs-meta">>>> </span>probs = jax.nn.softmax(logits_per_image, axis=<span class="hljs-number">1</span>) <span class="hljs-comment"># we can take the softmax to get the label probabilities</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFVisionTextDualEncoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionTextDualEncoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>TFVisionTextDualEncoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFVisionTextDualEncoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFVisionTextDualEncoderModel</span></span></h3> <a id="transformers.TFVisionTextDualEncoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFVisionTextDualEncoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 
11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L176" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFVisionTextDualEncoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionTextDualEncoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model
as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded
via the <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> method. The projection layers are automatically added to the model and
should be fine-tuned on a downstream task, like contrastive image-text modeling.</p> <p>In <a href="https://arxiv.org/abs/2111.07991" rel="nofollow">LiT: Zero-Shot Transfer with Locked-image Text Tuning</a> it is shown how
leveraging pre-trained (locked/frozen) image and text model for contrastive learning yields significant improvment
on new zero-shot vision tasks such as image classification or retrieval.</p> <p>After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other
models (see the examples for more information).</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)</p> <p>This model is also a Keras <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">Model</a> subclass. Use it as a
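As a loose illustration of the LiT recipe mentioned above, the image tower can be kept frozen while only the text tower and projection layers are trained. The sketch below is an assumption-laden example rather than a documented recipe: it assumes the model exposes its sub-models as `vision_model` and `text_model` attributes and relies on the standard Keras `trainable` flag.

```python
# Illustrative sketch, not from the docs: LiT-style "locked-image" tuning,
# assuming the usual `vision_model` / `text_model` sub-model attributes.
from transformers import TFVisionTextDualEncoderModel

model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)

# Freeze the image tower so only the text tower and projection layers receive updates.
model.vision_model.trainable = False
```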
#### call

[](#transformers.TFVisionTextDualEncoderModel.call)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L341)

( input\_ids: tf.Tensor | None = Nonepixel\_values: tf.Tensor | None = Noneattention\_mask: tf.Tensor | None = Noneposition\_ids: tf.Tensor | None = Nonereturn\_loss: Optional\[bool\] = Nonetoken\_type\_ids: tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False ) → `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or `tuple(tf.Tensor)`

Parameters:

- **input\_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position\_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **pixel\_values** (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)). See `ViTImageProcessor.__call__()` for details.
- **return\_loss** (`bool`, _optional_) — Whether or not to return the contrastive loss.
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or `tuple(tf.Tensor)`

A `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionTextDualEncoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) and inputs.
- **loss** (`tf.Tensor` of shape `(1,)`, _optional_, returned when `return_loss` is `True`) — Contrastive loss for image-text similarity.
- **logits\_per\_image** (`tf.Tensor` of shape `(image_batch_size, text_batch_size)`) — The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- **logits\_per\_text** (`tf.Tensor` of shape `(text_batch_size, image_batch_size)`) — The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- **text\_embeds** (`tf.Tensor` of shape `(batch_size, output_dim)`) — The text embeddings obtained by applying the projection layer to the pooled output of [TFCLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.TFCLIPTextModel).
- **image\_embeds** (`tf.Tensor` of shape `(batch_size, output_dim)`) — The image embeddings obtained by applying the projection layer to the pooled output of [TFCLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.TFCLIPVisionModel).
- **text\_model\_output** (`~modeling_tf_utils.TFBaseModelOutputWithPooling`) — The output of the [TFCLIPTextModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.TFCLIPTextModel).
- **vision\_model\_output** (`~modeling_tf_utils.TFBaseModelOutputWithPooling`) — The output of the [TFCLIPVisionModel](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.TFCLIPVisionModel).
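When `return_loss=True`, the returned `loss` is a CLIP-style symmetric contrastive loss over the batch. The sketch below only illustrates that idea (it is not the library's internal implementation) and assumes that matching image/text pairs sit on the diagonal of `logits_per_text`.

```python
# Illustrative sketch of a CLIP-style symmetric contrastive loss computed from
# the returned logits; this is not the library's internal implementation.
import tensorflow as tf

def clip_style_loss(logits_per_text: tf.Tensor) -> tf.Tensor:
    # Matching image/text pairs are assumed to lie on the batch diagonal.
    labels = tf.range(tf.shape(logits_per_text)[0])
    caption_loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(labels, logits_per_text, from_logits=True)
    )
    image_loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(labels, tf.transpose(logits_per_text), from_logits=True)
    )
    return (caption_loss + image_loss) / 2.0
```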
The [TFVisionTextDualEncoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-text-dual-encoder#transformers.TFVisionTextDualEncoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:

```python
>>> from PIL import Image
>>> import requests
>>> import tensorflow as tf
>>> from transformers import (
...     TFVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
...     return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image  # this is the image-text similarity score

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = TFVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = tf.nn.softmax(logits_per_image, axis=1)  # we can take the softmax to get the label probabilities
```
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/vision-encoder-decoder" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Vision Encoder Decoder Models</a>
<a href="/docs/transformers/model_doc/visual_bert" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">VisualBERT<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"VisionTextDualEncoder","isExpanded":true,"id":"visiontextdualencoder","url":"#visiontextdualencoder","sections":[{"title":"Overview","isExpanded":true,"id":"overview","url":"#overview"},{"title":"VisionTextDualEncoderConfig","isExpanded":true,"id":"transformers.VisionTextDualEncoderConfig","url":"#transformers.VisionTextDualEncoderConfig"},{"title":"VisionTextDualEncoderProcessor","isExpanded":true,"id":"transformers.VisionTextDualEncoderProcessor","url":"#transformers.VisionTextDualEncoderProcessor"},{"title":"VisionTextDualEncoderModel","isExpanded":true,"id":"transformers.VisionTextDualEncoderModel","url":"#transformers.VisionTextDualEncoderModel"},{"title":"FlaxVisionTextDualEncoderModel","isExpanded":true,"id":"transformers.FlaxVisionTextDualEncoderModel","url":"#transformers.FlaxVisionTextDualEncoderModel"},{"title":"TFVisionTextDualEncoderModel","isExpanded":true,"id":"transformers.TFVisionTextDualEncoderModel","url":"#transformers.TFVisionTextDualEncoderModel"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#visiontextdualencoder" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-visiontextdualencoder"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.VisionTextDualEncoderConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderConfig"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Config</a> <a href="#transformers.VisionTextDualEncoderProcessor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderProcessor"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Processor</a> <a href="#transformers.VisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderModel"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> <a href="#transformers.FlaxVisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxVisionTextDualEncoderModel"><wbr>Flax<wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> <a href="#transformers.TFVisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFVisionTextDualEncoderModel">TF<wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/vision-text-dual-encoder" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/vision-text-dual-encoder");
}
</script>
<iframe name="__privateStripeMetricsController2260" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Fvision-text-dual-encoder&title=VisionTextDualEncoder&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:01.001Z |
https://huggingface.co/docs/transformers/tasks/video-classification | The documentation page TASKS/VIDEO-CLASSIFICATION doesn’t exist in v4.30.0, but exists on the main version. Click [here](/docs/transformers/main/en/tasks/video-classification) to redirect to the main version of the documentation. | 2023-06-27T19:55:01.055Z |
|
https://huggingface.co/docs/transformers/model_doc/data2vec-text | The documentation page MODEL\_DOC/DATA2VEC-TEXT doesn’t exist in v4.30.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/data2vec-text) to redirect to the main version of the documentation. | 2023-06-27T19:55:01.063Z |
|
Vision Encoder Decoder Models | https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder | ## [](#overview)Overview
The [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (_e.g._ [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (_e.g._ [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)).
The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
After such a [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below for more information).
An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel).
## [](#randomly-initializing-visionencoderdecodermodel-from-model-configurations)Randomly initializing `VisionEncoderDecoderModel` from model configurations.
[VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [ViTModel](/docs/transformers/v4.30.0/en/model_doc/vit#transformers.ViTModel) configuration for the encoder and the default `BertForCausalLM` configuration for the decoder.
```
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()
>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = VisionEncoderDecoderModel(config=config)```
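The randomly initialized model wraps both sub-configurations under `model.config`; as a quick sanity check of the composed config (the expected outputs below simply reflect the default config classes used above):
```
>>> model.config.encoder.model_type
'vit'
>>> model.config.decoder.model_type
'bert'```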
## [](#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder)Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.
[VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, _e.g._ [Swin](swin), can serve as the encoder, while pretrained auto-encoding models (_e.g._ BERT), pretrained causal language models (_e.g._ GPT2) and the pretrained decoder part of sequence-to-sequence models (_e.g._ the decoder of BART) can all be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `VisionEncoderDecoderModel` class provides a [VisionEncoderDecoderModel.from\_encoder\_decoder\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained) method.
```
>>> from transformers import VisionEncoderDecoderModel
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
... )```
## [](#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference)Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference.
To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
To perform inference, one uses the `generate` method, which autoregressively generates text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling.
```
>>> import requests
>>> from PIL import Image
>>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel
>>>
>>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>>
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
>>>
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
a cat laying on a blanket next to a cat laying on a bed```
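The decoding strategy is selected through the arguments of `generate`. As a rough sketch reusing the `model`, `pixel_values` and `tokenizer` from the snippet above (the specific values such as `num_beams=4` or `top_k=50` are illustrative, not tuned):
```
>>> # beam search decoding
>>> generated_ids = model.generate(pixel_values, num_beams=4, max_length=20)
>>> # multinomial sampling
>>> generated_ids = model.generate(pixel_values, do_sample=True, top_k=50, max_length=20)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]```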
## [](#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel)Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`.
`TFVisionEncoderDecoderModel.from_pretrained()` currently doesn’t support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:
```
>>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel
>>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> _model.encoder.save_pretrained("./encoder")
>>> _model.decoder.save_pretrained("./decoder")
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
... )
>>>
>>> model.config = _model.config```
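If you need the TensorFlow model more than once, one option is to persist it (together with the copied config) so it can later be loaded directly; the local path below is only an example:
```
>>> model.save_pretrained("./vit-gpt2-tf")
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-gpt2-tf")```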
## [](#training)Training
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: `pixel_values` (which are the images) and `labels` (which are the `input_ids` of the encoded target sequence).
```
>>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
>>> labels = tokenizer(
... "an image of two cats chilling on a couch",
... return_tensors="pt",
... ).input_ids
>>>
>>> loss = model(pixel_values=pixel_values, labels=labels).loss```
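In an actual fine-tuning run, this loss would be backpropagated inside a training loop over many (image, text) pairs. A minimal sketch with a plain PyTorch optimizer (the learning rate is an arbitrary placeholder):
```
>>> import torch
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> loss = model(pixel_values=pixel_values, labels=labels).loss
>>> loss.backward()  # compute gradients
>>> optimizer.step()  # update encoder and decoder weights
>>> optimizer.zero_grad()```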
This model was contributed by [nielsr](https://github.com/nielsrogge). This model’s TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh).
## [](#transformers.VisionEncoderDecoderConfig)VisionEncoderDecoderConfig
### class transformers.VisionEncoderDecoderConfig
[](#transformers.VisionEncoderDecoderConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L34)
( \*\*kwargs )
Parameters
- [](#transformers.VisionEncoderDecoderConfig.kwargs)**kwargs** (_optional_) — Dictionary of keyword arguments. Notably:
- **encoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the encoder config.
- **decoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the decoder config.
[VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) is the configuration class to store the configuration of a [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel). It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:
```
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
>>>
>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()
>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>>
>>> model = VisionEncoderDecoderModel(config=config)
>>>
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>>
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True
>>>
>>> model.save_pretrained("my-model")
>>>
>>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model")
>>> model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)```
#### from\_encoder\_decoder\_configs
[](#transformers.VisionEncoderDecoderConfig.from_encoder_decoder_configs)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L100)
( encoder\_config: PretrainedConfigdecoder\_config: PretrainedConfig\*\*kwargs ) → [VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)
An instance of a configuration object
Instantiate a [VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
#### to\_dict
[](#transformers.VisionEncoderDecoderConfig.to_dict)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L117)
( ) → `Dict[str, any]`
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default _to\_dict()_ from _PretrainedConfig_.
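For instance, the serialized dictionary nests the two sub-configurations and records the composite model type (a quick illustration reusing the `encoder_decoder_config` object from the example above):
```
>>> config_dict = encoder_decoder_config.to_dict()
>>> "encoder" in config_dict and "decoder" in config_dict
True
>>> config_dict["model_type"]
'vision-encoder-decoder'```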
## [](#transformers.VisionEncoderDecoderModel)VisionEncoderDecoderModel
### class transformers.VisionEncoderDecoderModel
[](#transformers.VisionEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L151)
( config: typing.Optional\[transformers.configuration\_utils.PretrainedConfig\] = Noneencoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = Nonedecoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None )
Parameters
- [](#transformers.VisionEncoderDecoderModel.config)**config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
[VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
#### forward
[](#transformers.VisionEncoderDecoderModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L519)
( pixel\_values: typing.Optional\[torch.FloatTensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`
The [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
```
>>> from transformers import AutoProcessor, VisionEncoderDecoderModel
>>> import requests
>>> from PIL import Image
>>> import torch
>>> processor = AutoProcessor.from_pretrained("microsoft/trocr-base-handwritten")
>>> model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
>>>
>>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>>
>>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
>>> model.config.pad_token_id = processor.tokenizer.pad_token_id
>>> model.config.vocab_size = model.config.decoder.vocab_size
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> text = "hello world"
>>> labels = processor.tokenizer(text, return_tensors="pt").input_ids
>>> outputs = model(pixel_values=pixel_values, labels=labels)
>>> loss = outputs.loss
>>>
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]```
#### from\_encoder\_decoder\_pretrained
[](#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L365)
( encoder\_pretrained\_model\_name\_or\_path: str = Nonedecoder\_pretrained\_model\_name\_or\_path: str = None\*model\_args\*\*kwargs )
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.
Example:
```
>>> from transformers import VisionEncoderDecoderModel
>>>
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>>
>>> model.save_pretrained("./vit-bert")
>>>
>>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert")```
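Because the warm-started model is returned in evaluation mode, remember to switch it back before fine-tuning:
```
>>> _ = model.train()  # re-enable dropout for fine-tuning```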
## [](#transformers.TFVisionEncoderDecoderModel)TFVisionEncoderDecoderModel
### class transformers.TFVisionEncoderDecoderModel
[](#transformers.TFVisionEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L176)
( \*args\*\*kwargs )
Parameters
- [](#transformers.TFVisionEncoderDecoderModel.config)**config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from [TFPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
[TFVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one of the base model classes as decoder when created with the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the encoder and [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the decoder.
#### call
[](#transformers.TFVisionEncoderDecoderModel.call)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L486)
( pixel\_values: np.ndarray | tf.Tensor | None = Nonedecoder\_input\_ids: np.ndarray | tf.Tensor | None = Nonedecoder\_attention\_mask: np.ndarray | tf.Tensor | None = Noneencoder\_outputs: Optional\[Union\[Tuple, TFBaseModelOutput\]\] = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Nonedecoder\_inputs\_embeds: np.ndarray | tf.Tensor | None = Nonelabels: np.ndarray | tf.Tensor | None = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False\*\*kwargs ) → [transformers.modeling\_tf\_outputs.TFSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput) or `tuple(tf.Tensor)`
The [TFVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
```
>>> from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>>
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> img = Image.open(requests.get(url, stream=True).raw)
>>>
>>> pixel_values = image_processor(images=img, return_tensors="tf").pixel_values
>>> decoder_input_ids = decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
>>>
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids, labels=decoder_input_ids)
>>> loss, logits = outputs.loss, outputs.logits
>>>
>>> model.save_pretrained("vit-gpt2")
>>> model = TFVisionEncoderDecoderModel.from_pretrained("vit-gpt2")
>>>
>>> generated = model.generate(pixel_values, decoder_start_token_id=model.config.decoder.bos_token_id)```
#### from\_encoder\_decoder\_pretrained
[](#transformers.TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L338)
( encoder\_pretrained\_model\_name\_or\_path: str = Nonedecoder\_pretrained\_model\_name\_or\_path: str = None\*model\_args\*\*kwargs )
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
```
>>> from transformers import TFVisionEncoderDecoderModel
>>>
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>>
>>> model.save_pretrained("./vit-bert")
>>>
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert")```
## [](#transformers.FlaxVisionEncoderDecoderModel)FlaxVisionEncoderDecoderModel
### class transformers.FlaxVisionEncoderDecoderModel
[](#transformers.FlaxVisionEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L268)
( config: VisionEncoderDecoderConfiginput\_shape: typing.Optional\[typing.Tuple\] = Noneseed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs )
Parameters
- [](#transformers.FlaxVisionEncoderDecoderModel.config)**config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- [](#transformers.FlaxVisionEncoderDecoderModel.dtype)**dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`.
**Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.**
If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
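As a sketch of the above (the checkpoint name is reused from the earlier PyTorch captioning example, and loading it into Flax via `from_pt=True` is an assumption here):
```
>>> import jax.numpy as jnp
>>> from transformers import FlaxVisionEncoderDecoderModel
>>> # run the computation in bfloat16, e.g. on TPU
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained(
...     "nlpconnect/vit-gpt2-image-captioning", from_pt=True, dtype=jnp.bfloat16
... )
>>> # optionally also cast the parameters themselves to bfloat16
>>> model.params = model.to_bf16(model.params)```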
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
[FlaxVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base vision model classes of the library as encoder module and another one as decoder module when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
#### \_\_call\_\_
[](#transformers.FlaxVisionEncoderDecoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L598)
( pixel\_values: ndarraydecoder\_input\_ids: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = Nonedecoder\_attention\_mask: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = Nonedecoder\_position\_ids: typing.Optional\[jax.\_src.numpy.ndarray.ndarray\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonetrain: bool = Falseparams: dict = Nonedropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`
The [FlaxVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
```
>>> from transformers import FlaxVisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>>
>>> tokenizer_output = AutoTokenizer.from_pretrained("gpt2")
>>>
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>> pixel_values = image_processor(images=image, return_tensors="np").pixel_values
>>>
>>> model.config.eos_token_id = model.config.decoder.eos_token_id
>>> model.config.pad_token_id = model.config.eos_token_id
>>>
>>> sequences = model.generate(pixel_values, num_beams=4, max_length=12).sequences
>>> captions = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)```
#### from\_encoder\_decoder\_pretrained
[](#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L723)
( encoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = Nonedecoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None\*model\_args\*\*kwargs )
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
```
>>> from transformers import FlaxVisionEncoderDecoderModel
>>>
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
... "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>>
>>> model.save_pretrained("./vit-gpt2")
>>>
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2")```
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":true,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","isExpanded":true,"id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/vision-encoder-decoder","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Vision Encoder Decoder Models"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Vision Encoder Decoder Models</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="vision-encoder-decoder-models" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#vision-encoder-decoder-models"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Vision Encoder Decoder Models</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel">VisionEncoderDecoderModel</a> can be used to initialize an image-to-text model with any
pretrained Transformer-based vision model as the encoder (<em>e.g.</em> <a href="vit">ViT</a>, <a href="beit">BEiT</a>, <a href="deit">DeiT</a>, <a href="swin">Swin</a>)
and any pretrained language model as the decoder (<em>e.g.</em> <a href="roberta">RoBERTa</a>, <a href="gpt2">GPT2</a>, <a href="bert">BERT</a>, <a href="distilbert">DistilBERT</a>).</p> <p>The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for
example) <a href="https://arxiv.org/abs/2109.10282" rel="nofollow">TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models</a> by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang,
Zhoujun Li, Furu Wei.</p> <p>After such a <a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel">VisionEncoderDecoderModel</a> has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples below
for more information).</p> <p>An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates
the caption. Another example is optical character recognition. Refer to <a href="trocr">TrOCR</a>, which is an instance of <a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel">VisionEncoderDecoderModel</a>.</p> <h2 class="relative group"><a id="randomly-initializing-visionencoderdecodermodel-from-model-configurations" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#randomly-initializing-visionencoderdecodermodel-from-model-configurations"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Randomly initializing <code>VisionEncoderDecoderModel</code> from model configurations.</span></h2> <p><a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel">VisionEncoderDecoderModel</a> can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default <a href="/docs/transformers/v4.30.0/en/model_doc/vit#transformers.ViTModel">ViTModel</a> configuration for the encoder
and the default <code>BertForCausalLM</code> configuration for the decoder.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
<span class="hljs-meta">>>> </span>config_encoder = ViTConfig()
<span class="hljs-meta">>>> </span>config_decoder = BertConfig()
<span class="hljs-meta">>>> </span>config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
<span class="hljs-meta">>>> </span>model = VisionEncoderDecoderModel(config=config)</pre></div> <h2 class="relative group"><a id="initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Initialising <code>VisionEncoderDecoderModel</code> from a pretrained encoder and a pretrained decoder.</span></h2> <p><a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel">VisionEncoderDecoderModel</a> can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, <em>e.g.</em> <a href="swin">Swin</a>, can serve as the encoder and both pretrained auto-encoding models, <em>e.g.</em> BERT, pretrained causal language models, <em>e.g.</em> GPT2, as well as the pretrained decoder part of sequence-to-sequence models, <em>e.g.</em> decoder of BART, can be used as the decoder.
## [](#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder)Initializing `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder

[VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, _e.g._ [Swin](swin), can serve as the encoder. For the decoder, you can use pretrained auto-encoding models (_e.g._ BERT), pretrained causal language models (_e.g._ GPT2), or the pretrained decoder part of sequence-to-sequence models (_e.g._ the decoder of BART).
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder).
To do so, the `VisionEncoderDecoderModel` class provides a [VisionEncoderDecoderModel.from_encoder_decoder_pretrained()](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained) method.

```python
>>> from transformers import VisionEncoderDecoderModel

>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
... )
```
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load a fine-tuned image captioning model and corresponding tokenizer and image processor</span>
<span class="hljs-meta">>>> </span>model = VisionEncoderDecoderModel.from_pretrained(<span class="hljs-string">"nlpconnect/vit-gpt2-image-captioning"</span>)
<span class="hljs-meta">>>> </span>tokenizer = GPT2TokenizerFast.from_pretrained(<span class="hljs-string">"nlpconnect/vit-gpt2-image-captioning"</span>)
<span class="hljs-meta">>>> </span>image_processor = ViTImageProcessor.from_pretrained(<span class="hljs-string">"nlpconnect/vit-gpt2-image-captioning"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># let's perform inference on an image</span>
<span class="hljs-meta">>>> </span>url = <span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span>
<span class="hljs-meta">>>> </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw)
<span class="hljs-meta">>>> </span>pixel_values = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>).pixel_values
<span class="hljs-meta">>>> </span><span class="hljs-comment"># autoregressively generate caption (uses greedy decoding by default)</span>
<span class="hljs-meta">>>> </span>generated_ids = model.generate(pixel_values)
<span class="hljs-meta">>>> </span>generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(generated_text)
a cat laying on a blanket <span class="hljs-built_in">next</span> to a cat laying on a bed</pre></div> <h2 class="relative group"><a id="loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Loading a PyTorch checkpoint into <code>TFVisionEncoderDecoderModel</code>.</span></h2> <p><code>TFVisionEncoderDecoderModel.from_pretrained()</code> currently doesn’t support initializing the model from a
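The other decoding strategies mentioned above can be selected through the usual `generate()` arguments. As a small variation on the greedy call, the sketch below requests beam search; the argument values are illustrative only and not part of the original example:

```python
>>> # beam search instead of the default greedy decoding (argument values are arbitrary)
>>> generated_ids = model.generate(pixel_values, num_beams=4, max_length=16, early_stopping=True)
>>> beam_caption = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```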
## [](#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel)Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`

`TFVisionEncoderDecoderModel.from_pretrained()` currently doesn't support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:

```python
>>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel

>>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

>>> _model.encoder.save_pretrained("./encoder")
>>> _model.decoder.save_pretrained("./decoder")

>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
... )
>>> # This is only for copying some specific attributes of this particular model.
>>> model.config = _model.config
```
## [](#training)Training

Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only 2 inputs are required for the model to compute a loss: `pixel_values` (which are the images) and `labels` (which are the `input_ids` of the encoded target sequence).

```python
>>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
>>> from datasets import load_dataset

>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values

>>> labels = tokenizer(
...     "an image of two cats chilling on a couch",
...     return_tensors="pt",
... ).input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(pixel_values=pixel_values, labels=labels).loss
```
were contributed by [ydshieh](https://github.com/ydshieh).

## [](#transformers.VisionEncoderDecoderConfig)VisionEncoderDecoderConfig

### class transformers.VisionEncoderDecoderConfig

[](#transformers.VisionEncoderDecoderConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L34)

( \*\*kwargs )

Parameters:

- **kwargs** (_optional_) — Dictionary of keyword arguments. Notably:
  - **encoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the encoder config.
  - **decoder** ([PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the decoder config.

[VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) is the configuration class to store the configuration of a [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel). It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:

```python
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

>>> # Initializing a ViT & BERT style configuration
>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()

>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

>>> # Initializing a ViTBert model (with random weights) from ViT & bert-base-uncased style configurations
>>> model = VisionEncoderDecoderModel(config=config)

>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True

>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")

>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model")
>>> model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```
#### from_encoder_decoder_configs

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L100)

( encoder\_config: PretrainedConfig, decoder\_config: PretrainedConfig, \*\*kwargs ) → [VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)

Returns: [VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) — An instance of a configuration object.

Instantiate a [VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
#### to_dict

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L117)

( ) → `Dict[str, any]`

Returns: `Dict[str, any]` — Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary. Overrides the default `to_dict()` from `PretrainedConfig`.
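As a quick illustration of what this override adds, the serialized dictionary nests the full encoder and decoder configurations under their own keys. This is a minimal sketch re-using the `config` built in the example above; the exact set of top-level keys may vary across `transformers` versions:

```python
>>> # `config` was created with from_encoder_decoder_configs(ViTConfig(), BertConfig()) above
>>> config_dict = config.to_dict()
>>> config_dict["model_type"]
'vision-encoder-decoder'
>>> config_dict["encoder"]["model_type"], config_dict["decoder"]["model_type"]
('vit', 'bert')
```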
## [](#transformers.VisionEncoderDecoderModel)VisionEncoderDecoderModel

### class transformers.VisionEncoderDecoderModel

[](#transformers.VisionEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L151)

( config: typing.Optional[transformers.configuration\_utils.PretrainedConfig] = None, encoder: typing.Optional[transformers.modeling\_utils.PreTrainedModel] = None, decoder: typing.Optional[transformers.modeling\_utils.PreTrainedModel] = None )

Parameters:

- **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.

Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

[VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
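Since the model is typically composed from two separate checkpoints, a minimal sketch of that workflow may help here. The checkpoint names below are only illustrative examples of a compatible ViT encoder and BERT decoder; any compatible pair can be used:

```python
>>> from transformers import VisionEncoderDecoderModel

>>> # compose a model from a pretrained vision encoder and a pretrained text decoder
>>> # (cross-attention layers are added to the decoder and are randomly initialized)
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> # the composed model can then be saved and reloaded like any other model
>>> model.save_pretrained("./vit-bert")
>>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert")
```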
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L519)

( pixel\_values: typing.Optional[torch.FloatTensor] = None, decoder\_input\_ids: typing.Optional[torch.LongTensor] = None, decoder\_attention\_mask: typing.Optional[torch.BoolTensor] = None, encoder\_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, past\_key\_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, decoder\_inputs\_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use\_cache: typing.Optional[bool] = None, output\_attentions: typing.Optional[bool] = None, output\_hidden\_states: typing.Optional[bool] = None, return\_dict: typing.Optional[bool] = None, \*\*kwargs ) → [transformers.modeling_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor)). See [`ViTImageProcessor.__call__()`](/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2FeatureExtractor.__call__) for details.
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [`PreTrainedTokenizer.__call__()`](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). For training, `decoder_input_ids` are automatically created by the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id` (a short sketch of this shifting logic follows the parameter list below).
- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- **encoder_outputs** (`tuple(torch.FloatTensor)`, _optional_) — This tuple must consist of (`last_hidden_state`, _optional_: `hidden_states`, _optional_: `attentions`). `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **use_cache** (`bool`, _optional_) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, _optional_) — If set to `True`, the model will return a `~utils.Seq2SeqLMOutput` instead of a plain tuple.
- **kwargs** (_optional_) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: without a prefix, they will be input as `**encoder_kwargs` for the encoder forward function; with a `decoder_` prefix, they will be input as `**decoder_kwargs` for the decoder forward function.
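As referenced in the `decoder_input_ids` description, the right shift applied to `labels` can be sketched in a few lines of plain PyTorch. This is a standalone illustration of the described behavior (the token ids are arbitrary), not the library's internal helper:

```python
>>> import torch

>>> def shift_right(labels, pad_token_id, decoder_start_token_id):
...     shifted = labels.new_zeros(labels.shape)
...     shifted[:, 1:] = labels[:, :-1].clone()  # move every label one position to the right
...     shifted[:, 0] = decoder_start_token_id  # prepend the decoder start token
...     shifted.masked_fill_(shifted == -100, pad_token_id)  # ignored label positions become padding
...     return shifted

>>> labels = torch.tensor([[101, 2023, -100, -100]])
>>> shift_right(labels, pad_token_id=0, decoder_start_token_id=102).tolist()
[[102, 101, 2023, 0]]
```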
Returns: [transformers.modeling_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.Seq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Language modeling loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The [VisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> requests
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"microsoft/trocr-base-handwritten"</span>)
<span class="hljs-meta">>>> </span>model = VisionEncoderDecoderModel.from_pretrained(<span class="hljs-string">"microsoft/trocr-base-handwritten"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load image from the IAM dataset</span>
<span class="hljs-meta">>>> </span>url = <span class="hljs-string">"https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"</span>
<span class="hljs-meta">>>> </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw).convert(<span class="hljs-string">"RGB"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># training</span>
<span class="hljs-meta">>>> </span>model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
<span class="hljs-meta">>>> </span>model.config.pad_token_id = processor.tokenizer.pad_token_id
<span class="hljs-meta">>>> </span>model.config.vocab_size = model.config.decoder.vocab_size
<span class="hljs-meta">>>> </span>pixel_values = processor(image, return_tensors=<span class="hljs-string">"pt"</span>).pixel_values
<span class="hljs-meta">>>> </span>text = <span class="hljs-string">"hello world"</span>
<span class="hljs-meta">>>> </span>labels = processor.tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>).input_ids
<span class="hljs-meta">>>> </span>outputs = model(pixel_values=pixel_values, labels=labels)
<span class="hljs-meta">>>> </span>loss = outputs.loss
<span class="hljs-meta">>>> </span><span class="hljs-comment"># inference (generation)</span>
<span class="hljs-meta">>>> </span>generated_ids = model.generate(pixel_values)
<span class="hljs-meta">>>> </span>generated_text = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>]</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>from_encoder_decoder_pretrained</span></h4> <a id="transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L365" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_pretrained_model_name_or_path<span class="opacity-60">: str = None</span></span></span><span class="comma 
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_pretrained_model_name_or_path<span class="opacity-60">: str = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*model_args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 4 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_pretrained_model_name_or_path</strong> (<code>str</code>, <em>optional</em>) —
- **encoder\_pretrained\_model\_name\_or\_path** (`str`, *optional*) — Information necessary to initialize the image encoder. Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An example is `google/vit-base-patch16-224-in21k`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *tensorflow index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- **decoder\_pretrained\_model\_name\_or\_path** (`str`, *optional*, defaults to `None`) — Information necessary to initialize the text decoder. Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *tensorflow index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- **model\_args** (remaining positional arguments, *optional*) — All remaining positional arguments will be passed to the underlying model’s `__init__` method.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`).

  - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
  - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
  - To update the parent model configuration, do not use a prefix for each configuration parameter.

  A short sketch after the example below illustrates this prefix convention.

  Behaves differently depending on whether a `config` is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.

Example:

```python
>>> from transformers import VisionEncoderDecoderModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized</span>
<span class="hljs-meta">>>> </span>model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
<span class="hljs-meta">... </span> <span class="hljs-string">"google/vit-base-patch16-224-in21k"</span>, <span class="hljs-string">"bert-base-uncased"</span>
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># saving model after fine-tuning</span>
<span class="hljs-meta">>>> </span>model.save_pretrained(<span class="hljs-string">"./vit-bert"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># load fine-tuned model</span>
<span class="hljs-meta">>>> </span>model = VisionEncoderDecoderModel.from_pretrained(<span class="hljs-string">"./vit-bert"</span>)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFVisionEncoderDecoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>TFVisionEncoderDecoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFVisionEncoderDecoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFVisionEncoderDecoderModel</span></span></h3> <a id="transformers.TFVisionEncoderDecoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFVisionEncoderDecoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L176" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFVisionEncoderDecoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

[TFVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one of the base model classes as decoder when created with the [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the encoder and [from_pretrained()](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the decoder.
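As a quick orientation before the method reference below, here is a minimal sketch of composing, saving and reloading such a model in TensorFlow; it assumes both checkpoints ship TensorFlow weights and mirrors the PyTorch example above rather than reproducing an official recipe:

```python
>>> from transformers import TFVisionEncoderDecoderModel

>>> # compose a ViT encoder with a BERT decoder; the cross-attention layers are randomly initialized
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> # after fine-tuning, the composite model is saved/loaded like any other TF model
>>> model.save_pretrained("./vit-bert-tf")
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert-tf")
```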
<a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> class method for the decoder.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFVisionEncoderDecoderModel.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFVisionEncoderDecoderModel.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFVisionEncoderDecoderModel.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L486" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: 
np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: Optional[Union[Tuple, TFBaseModelOutput]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span>)</span> <span class="font-bold">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.TFVisionEncoderDecoderModel.call.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel.call.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>np.ndarray</code>, <code>tf.Tensor</code>, <code>List[tf.Tensor]</code> `<code>Dict[str, tf.Tensor]</code> or <code>Dict[str, np.ndarray]</code> and each example must have the shape <code>(batch_size, num_channels, height, width)</code>) —
- **pixel\_values** (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]`, `Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]`, and each example must have the shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using the vision model’s image processor. For example, using [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor). See [ViTImageProcessor.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2FeatureExtractor.__call__) for details.
- **decoder\_input\_ids** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary.

  Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)

  If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).

  Provide for sequence-to-sequence training to the decoder. Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.30.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.__call__) for details.
- **decoder\_attention\_mask** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- **encoder\_outputs** (`tuple(tuple(tf.Tensor))`, *optional*) — This tuple must consist of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past\_key\_values** (`tuple(tuple(tf.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder\_inputs\_embeds** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **labels** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **use\_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output\_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, *optional*) — If set to `True`, the model will return a `~utils.Seq2SeqLMOutput` instead of a plain tuple.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- **kwargs** (*optional*) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:

  - Without a prefix which will be input as `**encoder_kwargs` for the encoder forward function.
  - With a *decoder_* prefix which will be input as `**decoder_kwargs` for the decoder forward function.
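To see how the main arguments above fit together, here is a minimal, hypothetical sketch of a single training-style call; it reuses the ViT + BERT composition from the sketch above, and the dummy image tensor is only there to make the expected shapes explicit:

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFVisionEncoderDecoderModel

>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

>>> # the decoder start / pad token ids must be set before labels can be shifted into decoder_input_ids
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> # pixel_values: (batch_size, num_channels, height, width); labels: (batch_size, sequence_length)
>>> pixel_values = tf.random.uniform((1, 3, 224, 224))
>>> labels = tokenizer("a photo of a cat", return_tensors="tf").input_ids

>>> outputs = model(pixel_values=pixel_values, labels=labels)
>>> loss, logits = outputs.loss, outputs.logits
```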
**Returns**

[transformers.modeling_tf_outputs.TFSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) and inputs.
<ul>
<li>
<p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss.</p>
</li>
<li>
<p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p>
</li>
<li>
<p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p>
<p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see <code>past_key_values</code> input) to speed up sequential decoding.</p>
</li>
<li>
<p><strong>decoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape
<code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p>
</li>
<li>
<p><strong>decoder_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.</p>
</li>
<li>
<p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.</p>
</li>
<li>
<p><strong>encoder_last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p>
</li>
<li>
<p><strong>encoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape
<code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p>
</li>
<li>
<p><strong>encoder_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.</p>
</li>
</ul>
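As a quick illustration of the optional fields above, a forward pass can request them explicitly. A minimal sketch, assuming `model`, `pixel_values` and `decoder_input_ids` are prepared as in the example further below:

```python
>>> # request the optional attention and hidden-state outputs documented above
>>> outputs = model(
...     pixel_values=pixel_values,
...     decoder_input_ids=decoder_input_ids,
...     output_attentions=True,
...     output_hidden_states=True,
... )
>>> len(outputs.decoder_hidden_states)  # embedding output + one entry per decoder layer
>>> len(outputs.decoder_attentions)  # one entry per decoder layer
```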
The [TFVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")

>>> # initialize a vit2gpt2 from a pretrained ViT and a pretrained GPT2 model. Note that the cross-attention layers will be randomly initialized
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> img = Image.open(requests.get(url, stream=True).raw)

>>> # forward
>>> pixel_values = image_processor(images=img, return_tensors="tf").pixel_values  # Batch size 1
>>> decoder_input_ids = decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids  # Batch size 1
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)

>>> # training
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids, labels=decoder_input_ids)
>>> loss, logits = outputs.loss, outputs.logits

>>> # save and load from pretrained
>>> model.save_pretrained("vit-gpt2")
>>> model = TFVisionEncoderDecoderModel.from_pretrained("vit-gpt2")

>>> # generation
>>> generated = model.generate(pixel_values, decoder_start_token_id=model.config.decoder.bos_token_id)
```
#### from\_encoder\_decoder\_pretrained

[](#transformers.TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L338)

( encoder_pretrained_model_name_or_path: str = None decoder_pretrained_model_name_or_path: str = None \*model_args \*\*kwargs )

Parameters

- **encoder_pretrained_model_name_or_path** (`str`, *optional*) —
  Information necessary to initiate the encoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An example is `google/vit-base-patch16-224-in21k`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *pytorch index checkpoint file* (e.g., `./pt_model/`). In this case, `encoder_from_pt` should be set to `True`.
- **decoder_pretrained_model_name_or_path** (`str`, *optional*, defaults to `None`) —
  Information necessary to initiate the decoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *pytorch checkpoint file* (e.g., `./pt_model/`). In this case, `decoder_from_pt` should be set to `True`.
- **model_args** (remaining positional arguments, *optional*) —
  All remaining positional arguments will be passed to the underlying model’s `__init__` method.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) —
  Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`).
  - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
  - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
  - To update the parent model configuration, do not use a prefix for each configuration parameter.

  Behaves differently depending on whether a `config` is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Example:

```python
>>> from transformers import TFVisionEncoderDecoderModel

>>> # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-bert")
>>> # load fine-tuned model
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert")
```
## [](#transformers.FlaxVisionEncoderDecoderModel)FlaxVisionEncoderDecoderModel

### class transformers.FlaxVisionEncoderDecoderModel

[](#transformers.FlaxVisionEncoderDecoderModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L268)

( config: VisionEncoderDecoderConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True \*\*kwargs )

Parameters

- **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the [from_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) —
  The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and
  `jax.numpy.bfloat16` (on TPUs).

  This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
  specified all the computation will be performed with the given `dtype`.

  **Note that this only specifies the dtype of the computation and does not influence the dtype of model
  parameters.**

  If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and
  [to_bf16()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
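For example, running the computation in `bfloat16` and additionally casting the parameters could look roughly like this. A sketch only; `path/to/flax-vit-gpt2-checkpoint` is a hypothetical stand-in for any previously saved Flax checkpoint directory:

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # computation dtype is bfloat16; the parameters stay in float32 until cast explicitly
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained(
...     "path/to/flax-vit-gpt2-checkpoint", dtype=jnp.bfloat16
... )
>>> model.params = model.to_bf16(model.params)  # cast the parameters as well
```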
<a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</p></span></span> </li></ul> </div></div> <p>This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model
as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via
<a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> function and the decoder is loaded via <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a>
function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
generative task, like image captioning.</p> <p>The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in <a href="https://arxiv.org/abs/1907.12461" rel="nofollow">Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks</a> by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi
Zhou, Wei Li, Peter J. Liu.</p> <p>Additionally, in <a href="https://arxiv.org/abs/2109.10282" rel="nofollow">TrOCR: Transformer-based Optical Character Recognition with Pre-trained
Models</a> it is shown how leveraging large pretrained vision models for optical
character recognition (OCR) yields a significant performance improvement.</p> <p>After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any
other models (see the examples for more information).</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)</p> <p>This model is also a Flax Linen
<a href="https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html" rel="nofollow">flax.nn.Module</a> subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.</p> <p><a href="/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel">FlaxVisionEncoderDecoderModel</a> is a generic model class that will be instantiated as a transformer architecture
with the module (flax.nn.Module) of one of the base vision model classes of the library as encoder module and
another one as decoder module when created with the :meth<em>~transformers.FlaxAutoModel.from_pretrained</em> class method
for the encoder and :meth<em>~transformers.FlaxAutoModelForCausalLM.from_pretrained</em> class method for the decoder.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxVisionEncoderDecoderModel.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxVisionEncoderDecoderModel.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxVisionEncoderDecoderModel.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L598" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: ndarray</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[jax._src.numpy.ndarray.ndarray] = 
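A typical way to assemble such a model, sketched here with the same ViT and GPT2 checkpoints used in the TensorFlow example above (the newly added cross-attention weights are randomly initialized and need fine-tuning):

```python
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # initialize a vit-gpt2 model from a pretrained ViT encoder and a pretrained GPT2 decoder
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )
```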
#### \_\_call\_\_

[](#transformers.FlaxVisionEncoderDecoderModel.__call__)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L598)

( pixel_values: ndarray decoder_input_ids: typing.Optional[jax._src.numpy.ndarray.ndarray] = None decoder_attention_mask: typing.Optional[jax._src.numpy.ndarray.ndarray] = None decoder_position_ids: typing.Optional[jax._src.numpy.ndarray.ndarray] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → [transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **pixel_values** (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`) —
  Pixel values. Pixel values can be obtained using the vision model’s image processor. For example, using
  [AutoImageProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoImageProcessor). See `ViTImageProcessor.__call__()` for details.
- **decoder_input_ids** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) —
  Indices of decoder input sequence tokens in the vocabulary.

  Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.30.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.30.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and
  `PreTrainedTokenizer.__call__()` for details.

  [What are decoder input IDs?](../glossary#decoder-input-ids)
- **decoder_attention_mask** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) —
  Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
  be used by default.
- **decoder_position_ids** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*) —
  Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
  range `[0, config.decoder.max_position_embeddings - 1]`.
- **output_attentions** (`bool`, *optional*) —
  Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
  tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) —
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
  more detail.
- **return_dict** (`bool`, *optional*) —
  If set to `True`, the model will return a `~utils.FlaxSeq2SeqLMOutput` instead of a plain tuple.
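As noted for `pixel_values` above, the inputs are typically produced with an image processor and a tokenizer. A minimal sketch, assuming `img` is a PIL image as in the earlier example:

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")

>>> # numpy arrays are the usual input format for Flax models
>>> pixel_values = image_processor(images=img, return_tensors="np").pixel_values
>>> decoder_input_ids = tokenizer("a photo of", return_tensors="np").input_ids
```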
Returns

[transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionEncoderDecoderConfig](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) and inputs.

- **logits** (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(jnp.ndarray))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(jnp.ndarray)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The [FlaxVisionEncoderDecoderModel](/docs/transformers/v4.30.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel) forward method, overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import FlaxVisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> # load output tokenizer
>>> tokenizer_output = AutoTokenizer.from_pretrained("gpt2")
>>> # initialize a vit-gpt2 from pretrained ViT and GPT2 models. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>> pixel_values = image_processor(images=image, return_tensors="np").pixel_values
>>> # use GPT2's eos_token as the pad as well as eos token
>>> model.config.eos_token_id = model.config.decoder.eos_token_id
>>> model.config.pad_token_id = model.config.eos_token_id
>>> # generation
>>> sequences = model.generate(pixel_values, num_beams=4, max_length=12).sequences
>>> captions = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)
```

#### from\_encoder\_decoder\_pretrained

[](#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L723)
<span class=">
( encoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, decoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, \*model\_args, \*\*kwargs )

Parameters

- **encoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, *optional*) — Information necessary to initiate the encoder. Can be either:
    - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An example is `google/vit-base-patch16-224-in21k`.
    - A path to a *directory* containing model weights saved using [save\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **decoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, *optional*, defaults to `None`) — Information necessary to initiate the decoder. Can be either:
    - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
    - A path to a *directory* containing model weights saved using [save\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **model_args** (remaining positional arguments, *optional*) — All remaining positional arguments will be passed to the underlying model’s `__init__` method.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`).
    - To update the encoder configuration, use the prefix *encoder\_* for each configuration parameter.
    - To update the decoder configuration, use the prefix *decoder\_* for each configuration parameter.
    - To update the parent model configuration, do not use a prefix for each configuration parameter.

    Behaves differently depending on whether a `config` is provided or automatically loaded. A short sketch of the prefix convention follows the example below.

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Example:

```
>>> from transformers import FlaxVisionEncoderDecoderModel
>>> # initialize a vit-gpt2 from a pretrained ViT and a pretrained GPT2 model. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-gpt2")
>>> # load fine-tuned model
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2")
```
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/vilt" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>ViLT</a>
<a href="/docs/transformers/model_doc/vision-text-dual-encoder" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Vision Text Dual Encoder<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Vision Encoder Decoder Models","isExpanded":true,"id":"vision-encoder-decoder-models","url":"#vision-encoder-decoder-models","sections":[{"title":"Overview","isExpanded":true,"id":"overview","url":"#overview"},{"title":"Randomly initializing `VisionEncoderDecoderModel` from model configurations.","isExpanded":true,"id":"randomly-initializing-visionencoderdecodermodel-from-model-configurations","url":"#randomly-initializing-visionencoderdecodermodel-from-model-configurations"},{"title":"Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.","isExpanded":true,"id":"initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder","url":"#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"},{"title":"Loading an existing `VisionEncoderDecoderModel` checkpoint and perform inference.","isExpanded":true,"id":"loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference","url":"#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference"},{"title":"Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`.","isExpanded":true,"id":"loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel","url":"#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel"},{"title":"Training","isExpanded":true,"id":"training","url":"#training"},{"title":"VisionEncoderDecoderConfig","isExpanded":true,"id":"transformers.VisionEncoderDecoderConfig","url":"#transformers.VisionEncoderDecoderConfig"},{"title":"VisionEncoderDecoderModel","isExpanded":true,"id":"transformers.VisionEncoderDecoderModel","url":"#transformers.VisionEncoderDecoderModel"},{"title":"TFVisionEncoderDecoderModel","isExpanded":true,"id":"transformers.TFVisionEncoderDecoderModel","url":"#transformers.TFVisionEncoderDecoderModel"},{"title":"FlaxVisionEncoderDecoderModel","isExpanded":true,"id":"transformers.FlaxVisionEncoderDecoderModel","url":"#transformers.FlaxVisionEncoderDecoderModel"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#vision-encoder-decoder-models" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-vision-encoder-decoder-models"><wbr>Vision <wbr>Encoder <wbr>Decoder <wbr>Models</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#randomly-initializing-visionencoderdecodermodel-from-model-configurations" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-randomly-initializing-visionencoderdecodermodel-from-model-configurations"><wbr>Randomly initializing `<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model` from model configurations.</a> <a href="#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"><wbr>Initialising `<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model` from a pretrained encoder and a pretrained decoder.</a> <a 
href="#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference"><wbr>Loading an existing `<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model` checkpoint and perform inference.</a> <a href="#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel"><wbr>Loading a <wbr>Py<wbr>Torch checkpoint into `TF<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model`.</a> <a href="#training" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-training"><wbr>Training</a> <a href="#transformers.VisionEncoderDecoderConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionEncoderDecoderConfig"><wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Config</a> <a href="#transformers.VisionEncoderDecoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionEncoderDecoderModel"><wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model</a> <a href="#transformers.TFVisionEncoderDecoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFVisionEncoderDecoderModel">TF<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model</a> <a href="#transformers.FlaxVisionEncoderDecoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxVisionEncoderDecoderModel"><wbr>Flax<wbr>Vision<wbr>Encoder<wbr>Decoder<wbr>Model</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/vision-encoder-decoder" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/vision-encoder-decoder");
}
</script>
<iframe name="__privateStripeMetricsController8340" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Fvision-encoder-decoder&title=Vision%20Encoder%20Decoder%20Models&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:01.376Z |
SEW-D | https://huggingface.co/docs/transformers/model_doc/sew-d | ## [](#overview)Overview
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
The abstract from the paper is the following:
_This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes._
Tips:
- SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- SEWDForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer), as sketched below.
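A minimal sketch of this raw-waveform-in / CTC-decode-out flow, assuming a 16 kHz recording has already been loaded (e.g. with `soundfile` or 🤗 Datasets); the silent dummy array below is only a placeholder, and the checkpoint is the fine-tuned one reused in the examples further down:

```
>>> from transformers import AutoProcessor, SEWDForCTC
>>> import torch

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> raw_speech = [0.0] * 16000  # placeholder: one second of silence instead of a real recording
>>> inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
```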
This model was contributed by [anton-l](https://huggingface.co/anton-l).
## [](#documentation-resources)Documentation resources
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
## [](#transformers.SEWDConfig)SEWDConfig
### class transformers.SEWDConfig
[](#transformers.SEWDConfig)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/configuration_sew_d.py#L32)
( vocab\_size = 32hidden\_size = 768num\_hidden\_layers = 12num\_attention\_heads = 12intermediate\_size = 3072squeeze\_factor = 2max\_position\_embeddings = 512position\_buckets = 256share\_att\_key = Truerelative\_attention = Truepos\_att\_type = ('p2c', 'c2p')norm\_rel\_ebd = 'layer\_norm'hidden\_act = 'gelu\_python'hidden\_dropout = 0.1activation\_dropout = 0.1attention\_dropout = 0.1feat\_proj\_dropout = 0.0final\_dropout = 0.1initializer\_range = 0.02layer\_norm\_eps = 1e-07feature\_layer\_norm\_eps = 1e-05feat\_extract\_norm = 'group'feat\_extract\_activation = 'gelu'conv\_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)conv\_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)conv\_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)conv\_bias = Falsenum\_conv\_pos\_embeddings = 128num\_conv\_pos\_embedding\_groups = 16apply\_spec\_augment = Truemask\_time\_prob = 0.05mask\_time\_length = 10mask\_time\_min\_masks = 2mask\_feature\_prob = 0.0mask\_feature\_length = 10mask\_feature\_min\_masks = 0ctc\_loss\_reduction = 'mean'ctc\_zero\_infinity = Falseuse\_weighted\_layer\_sum = Falseclassifier\_proj\_size = 256pad\_token\_id = 0bos\_token\_id = 1eos\_token\_id = 2\*\*kwargs )
This is the configuration class to store the configuration of a [SEWDModel](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDModel). It is used to instantiate a SEW-D model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW-D [asapp/sew-d-tiny-100k](https://huggingface.co/asapp/sew-d-tiny-100k) architecture.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Example:
```
>>> from transformers import SEWDConfig, SEWDModel
>>> # Initializing a SEW-D asapp/sew-d-tiny-100k style configuration
>>> configuration = SEWDConfig()
>>> # Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration
>>> model = SEWDModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config```
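Since every architecture hyper-parameter listed above is a keyword argument of `SEWDConfig`, a smaller or larger variant can be configured before training from scratch. A minimal sketch (the sizes below are illustrative only, not a released checkpoint):

```
>>> from transformers import SEWDConfig, SEWDModel

>>> # hypothetical smaller variant; hidden_size must stay divisible by num_attention_heads
>>> custom_configuration = SEWDConfig(
...     hidden_size=384,
...     num_hidden_layers=6,
...     num_attention_heads=6,
...     intermediate_size=1536,
... )
>>> custom_model = SEWDModel(custom_configuration)
```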
## [](#transformers.SEWDModel)SEWDModel
### class transformers.SEWDModel
[](#transformers.SEWDModel)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1382)
( config: SEWDConfig )
Parameters
- [](#transformers.SEWDModel.config)**config** ([SEWDConfig](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.SEWDModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1449)
( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Nonemask\_time\_indices: typing.Optional\[torch.FloatTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`
The [SEWDModel](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDModel) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
```
>>> from transformers import AutoProcessor, SEWDModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 384]```
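The `output_hidden_states` flag from the signature above can be combined with the same inputs to inspect per-layer activations; a short sketch continuing the example (reusing its `model`, `inputs`, and `torch` import; not part of the original example):

```
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
>>> # tuple holding the initial feature embedding plus the output of each encoder layer
>>> hidden_states = outputs.hidden_states
```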
## [](#transformers.SEWDForCTC)SEWDForCTC
### class transformers.SEWDForCTC
[](#transformers.SEWDForCTC)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1511)
( configtarget\_lang = None )
Parameters
- [](#transformers.SEWDForCTC.config)**config** ([SEWDConfig](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
SEW-D Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.SEWDForCTC.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1559)
( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonelabels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`
The [SEWDForCTC](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForCTC) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
```
>>> from transformers import AutoProcessor, SEWDForCTC
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
0.21```
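For several utterances at once, the processor can pad the raw arrays to a common length (and, depending on the feature extractor configuration, return the matching `attention_mask`); a minimal sketch reusing the `processor`, `model`, `dataset`, and `torch` objects from the example above:

```
>>> # pad two clips from the demo dataset to the same length and transcribe them together
>>> audio_batch = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
>>> inputs = processor(audio_batch, sampling_rate=sampling_rate, padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> transcriptions = processor.batch_decode(torch.argmax(logits, dim=-1))
```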
## [](#transformers.SEWDForSequenceClassification)SEWDForSequenceClassification
### class transformers.SEWDForSequenceClassification
[](#transformers.SEWDForSequenceClassification)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1647)
( config )
Parameters
- [](#transformers.SEWDForSequenceClassification.config)**config** ([SEWDConfig](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
SEW-D Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.
SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
This model inherits from [PreTrainedModel](/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
[](#transformers.SEWDForSequenceClassification.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1692)
( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonelabels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`
The [SEWDForSequenceClassification](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForSequenceClassification) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
```
>>> from transformers import AutoFeatureExtractor, SEWDForSequenceClassification
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting")
>>> model = SEWDForSequenceClassification.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting")
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'_unknown_'
>>> # compute loss
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
3.16```
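The same keyword-spotting checkpoint can also be driven through the high-level `pipeline` API; a minimal sketch (reusing `dataset` from the example above, and assuming the clip is already sampled at the rate the checkpoint expects):

```
>>> from transformers import pipeline

>>> classifier = pipeline("audio-classification", model="anton-l/sew-d-mid-400k-ft-keyword-spotting")
>>> predictions = classifier(dataset[0]["audio"]["array"])  # list of {"label", "score"} dicts
```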
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="SEW-D">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/model_doc/sew-d">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>SEW-D</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"sewd","sections":[{"local":"overview","title":"Overview"},{"local":"documentation-resources","title":"Documentation resources"},{"local":"transformers.SEWDConfig","title":"SEWDConfig"},{"local":"transformers.SEWDModel","title":"SEWDModel"},{"local":"transformers.SEWDForCTC","title":"SEWDForCTC"},{"local":"transformers.SEWDForSequenceClassification","title":"SEWDForSequenceClassification"}],"title":"SEW-D"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":true,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","isExpanded":true,"id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"model_doc/sew-d","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"SEW-D"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] 
false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">SEW-D</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 
group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div 
class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/audio-spectrogram-transformer">Audio Spectrogram Transformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/clap">CLAP </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/hubert">Hubert </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mctct">MCTCT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/mms">MMS </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/sew">SEW </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/model_doc/sew-d">SEW-D </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/speech_to_text">Speech2Text </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/speech_to_text_2">Speech2Text2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/speecht5">SpeechT5 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/unispeech">UniSpeech </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/unispeech-sat">UniSpeech-SAT </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/wav2vec2">Wav2Vec2 </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/wav2vec2-conformer">Wav2Vec2-Conformer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/wav2vec2_phoneme">Wav2Vec2Phoneme </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/wavlm">WavLM </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/whisper">Whisper </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xls_r">XLS-R </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/model_doc/xlsr_wav2vec2">XLSR-Wav2Vec2 </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="sewd" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#sewd"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>SEW-D</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Overview</span></h2> <p>SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in <a href="https://arxiv.org/abs/2109.06870" rel="nofollow">Performance-Efficiency Trade-offs
in Unsupervised Pre-training for Speech Recognition</a> by Felix Wu, Kwangyoun Kim,
Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.</p> <p>The abstract from the paper is the following:</p> <p><em>This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.</em></p> <p>Tips:</p> <ul><li>SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
using <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li></ul> <p>This model was contributed by <a href="https://huggingface.co/anton-l" rel="nofollow">anton-l</a>.</p> <h2 class="relative group"><a id="documentation-resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#documentation-resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Documentation resources</span></h2> <ul><li><a href="../tasks/audio_classification">Audio classification task guide</a></li> <li><a href="../tasks/asr">Automatic speech recognition task guide</a></li></ul> <h2 class="relative group"><a id="transformers.SEWDConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>SEWDConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SEWDConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 
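A minimal sketch of CTC inference and decoding with SEW-D, assuming a fine-tuned Hub checkpoint (here `asapp/sew-d-tiny-100k-ft-ls100h`) and a 16 kHz mono waveform:

```python
import numpy as np
import torch
from transformers import SEWDForCTC, Wav2Vec2Processor

# Assumed fine-tuned CTC checkpoint; any SEW-D checkpoint with a CTC head works the same way.
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = SEWDForCTC.from_pretrained(checkpoint)

# Placeholder: one second of silence at 16 kHz. Replace with a real waveform (1-D float array).
waveform = np.zeros(16_000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, time, vocab_size)

predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

The processor wraps the feature extractor together with the Wav2Vec2CTCTokenizer, so `batch_decode` collapses repeated CTC tokens and strips the blank symbol to produce text.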
## [](#documentation-resources)Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

## [](#transformers.SEWDConfig)SEWDConfig

### class transformers.SEWDConfig

[](#transformers.SEWDConfig)[< source >](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/configuration_sew_d.py#L32)

( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, squeeze_factor = 2, max_position_embeddings = 512, position_buckets = 256, share_att_key = True, relative_attention = True, pos_att_type = ('p2c', 'c2p'), norm_rel_ebd = 'layer_norm', hidden_act = 'gelu_python', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, final_dropout = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-07, feature_layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512), conv_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1), conv_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, ctc_loss_reduction = 'mean', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, **kwargs )
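A short sketch of the usual configuration workflow, using the defaults listed in the signature above (the model is randomly initialized from the configuration; any argument, such as `squeeze_factor`, can be overridden before instantiation):

```python
from transformers import SEWDConfig, SEWDModel

# Build a SEW-D style configuration; arguments not passed keep the defaults shown above.
configuration = SEWDConfig(squeeze_factor=2, hidden_act="gelu_python")

# Initialize a model (with random weights) from that configuration.
model = SEWDModel(configuration)

# The configuration can always be read back from the model.
configuration = model.config
```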
class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 40 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 32) —
Vocabulary size of the SEW-D model. Defines the number of different tokens that can be represented by the
<code>inputs_ids</code> passed when calling <code>SEWD</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) —
Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.squeeze_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.squeeze_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>squeeze_factor</strong> (<code>int</code>, <em>optional</em>, defaults to 2) —
Sequence length downsampling factor after the encoder and upsampling factor after the transformer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.position_buckets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.position_buckets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_buckets</strong> (<code>int</code>, <em>optional</em>, defaults to 256) —
The maximum size of relative position embeddings.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.share_att_key" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.share_att_key"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>share_att_key</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether to share attention key with c2p and p2c.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.relative_attention" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.relative_attention"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>relative_attention</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether to use relative position encoding.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.pos_att_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.pos_att_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pos_att_type</strong> (<code>Tuple[str]</code>, <em>optional</em>, defaults to <code>("p2c", "c2p")</code>) —
The type of relative position attention, it can be a combination of <code>("p2c", "c2p")</code>, e.g. <code>("p2c")</code>,
<code>("p2c", "c2p")</code>, <code>("p2c", "c2p")</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.norm_rel_ebd" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.norm_rel_ebd"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>norm_rel_ebd</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"layer_norm"</code>) —
Whether to use layer norm in relative embedding (<code>"layer_norm"</code> if yes)</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu_python"</code>) —
The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>,
<code>"relu"</code>, <code>"selu"</code>, <code>"gelu_python"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.hidden_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.hidden_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) —
The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.final_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.final_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>final_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) —
The dropout probability for the final projection layer of <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForCTC">SEWDForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-7) —
The epsilon used by the layer normalization layers in the transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.feature_layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.feature_layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feature_layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-5) —
The epsilon used by the layer normalization after the feature encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.feat_extract_norm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.feat_extract_norm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_extract_norm</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"group"</code>) —
The norm to be applied to 1D convolutional layers in feature encoder. One of <code>"group"</code> for group
normalization of only the first 1D convolutional layer or <code>"layer"</code> for layer normalization of all 1D
convolutional layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.feat_proj_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.feat_proj_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_proj_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) —
The dropout probability for output of the feature encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.feat_extract_activation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.feat_extract_activation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_extract_activation</strong> (<code>str, </code>optional<code>, defaults to </code>“gelu”<code>) -- The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, </code>“gelu”<code>, </code>“relu”<code>, </code>“selu”<code>and</code>“gelu_new”` are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.conv_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.conv_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_dim</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)</code>) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of <em>conv_dim</em> defines the number of 1D convolutional layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.conv_stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.conv_stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_stride</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)</code>) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of <em>conv_stride</em> defines the number of convolutional layers and has to match the length of <em>conv_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.conv_kernel" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.conv_kernel"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_kernel</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)</code>) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of <em>conv_kernel</em> defines the number of convolutional layers and has to match the length of
<em>conv_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.conv_bias" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.conv_bias"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_bias</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Whether the 1D convolutional layers have a bias.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.num_conv_pos_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.num_conv_pos_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.num_conv_pos_embedding_groups" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.num_conv_pos_embedding_groups"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embedding_groups</strong> (<code>int</code>, <em>optional</em>, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.apply_spec_augment" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.apply_spec_augment"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>apply_spec_augment</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) —
Whether to apply <em>SpecAugment</em> data augmentation to the outputs of the feature encoder. For reference see
<a href="https://arxiv.org/abs/1904.08779" rel="nofollow">SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_time_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_time_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procecure generates ”mask_time_prob<em>len(time_axis)/mask_time_length” independent masks over the axis. If
reasoning from the propability of each feature vector to be chosen as the start of the vector span to be
masked, </em>mask_time_prob<em> should be `prob_vector_start</em>mask_time_length<code>. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if </code>apply_spec_augment is True`.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_time_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_time_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) —
Length of vector span along the time axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_time_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_time_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 2), —
The minimum number of masks of length <code>mask_feature_length</code> generated along the time axis, each time step,
irrespectively of <code>mask_feature_prob</code>. Only relevant if ”mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_feature_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_feature_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procecure generates ”mask_feature_prob<em>len(feature_axis)/mask_time_length” independent masks over
the axis. If reasoning from the propability of each feature vector to be chosen as the start of the vector
span to be masked, </em>mask_feature_prob<em> should be `prob_vector_start</em>mask_feature_length<code>. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if </code>apply_spec_augment is
True`.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_feature_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_feature_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) —
Length of vector span along the feature axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.mask_feature_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.mask_feature_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 0), —
The minimum number of masks of length <code>mask_feature_length</code> generated along the feature axis, each time
step, irrespectively of <code>mask_feature_prob</code>. Only relevant if
”mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.diversity_loss_weight" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.diversity_loss_weight"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>diversity_loss_weight</strong> (<code>int</code>, <em>optional</em>, defaults to 0.1) —
The weight of the codebook diversity loss component.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.ctc_loss_reduction" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.ctc_loss_reduction"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ctc_loss_reduction</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"sum"</code>) —
Specifies the reduction to apply to the output of <code>torch.nn.CTCLoss</code>. Only relevant when training an
instance of <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForCTC">SEWDForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.ctc_zero_infinity" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.ctc_zero_infinity"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ctc_zero_infinity</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Whether to zero infinite losses and the associated gradients of <code>torch.nn.CTCLoss</code>. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForCTC">SEWDForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.use_weighted_layer_sum" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.use_weighted_layer_sum"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_weighted_layer_sum</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of <a href="/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification">Wav2Vec2ForSequenceClassification</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDConfig.classifier_proj_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.classifier_proj_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>classifier_proj_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.</span></span> </li></ul> </div></div> <p>This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDModel">SEWDModel</a>. It is used to instantiate a SEW-D
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SEW-D
<a href="https://huggingface.co/asapp/sew-d-tiny-100k" rel="nofollow">asapp/sew-d-tiny-100k</a> architecture.</p> <p>Configuration objects inherit from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the
documentation from <a href="/docs/transformers/v4.30.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.SEWDConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> SEWDConfig, SEWDModel
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Initializing a SEW-D asapp/sew-d-tiny-100k style configuration</span>
<span class="hljs-meta">>>> </span>configuration = SEWDConfig()
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration</span>
<span class="hljs-meta">>>> </span>model = SEWDModel(configuration)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># Accessing the model configuration</span>
<span class="hljs-meta">>>> </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.SEWDModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>SEWDModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SEWDModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">SEWDModel</span></span></h3> <a id="transformers.SEWDModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SEWDModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1382" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: SEWDConfig</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig">SEWDConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top.
SEW-D was proposed in <a href="https://arxiv.org/abs/2109.06870" rel="nofollow">Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition</a> by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving etc.).</p> <p>This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
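Before the `forward` reference below, here is a hedged sketch of a typical call. It assumes the [asapp/sew-d-tiny-100k](https://huggingface.co/asapp/sew-d-tiny-100k) checkpoint ships a preprocessor configuration that [AutoProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoProcessor) can load, and it uses a silent dummy waveform in place of a real `.flac`/`.wav` file:

```
>>> import torch
>>> from transformers import AutoProcessor, SEWDModel

>>> # Assumes the hub checkpoint provides a compatible feature extractor / processor
>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k")
>>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k")

>>> # One second of silent 16 kHz audio standing in for a decoded audio file
>>> raw_speech = [0.0] * 16000
>>> inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, downsampled sequence length, hidden_size)
```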
behavior.

#### forward

[](#transformers.SEWDModel.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1449)

( input\_values: typing.Optional[torch.Tensor]attention\_mask: typing.Optional[torch.Tensor] = Nonemask\_time\_indices: typing.Optional[torch.FloatTensor] = Noneoutput\_attentions: typing.Optional[bool] = Noneoutput\_hidden\_states: typing.Optional[bool] = Nonereturn\_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

[transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [SEWDModel](/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, SEWDModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 384]
```
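The processor can also pad several clips of different lengths to a common length, which is when the `attention_mask` described above becomes useful. The following is a minimal sketch, not part of the original reference, reusing the same checkpoint and dataset as the example above; `return_attention_mask=True` is passed explicitly as an assumption so the mask is always returned.

```
# batched inference sketch: pad two clips of different lengths and pass the attention mask
import torch
from transformers import AutoProcessor, SEWDModel
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

# two waveforms of different lengths are padded to the longest one
audio_batch = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
inputs = processor(
    audio_batch,
    sampling_rate=sampling_rate,
    padding=True,
    return_attention_mask=True,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

# (batch_size, padded_frames, hidden_size)
print(outputs.last_hidden_state.shape)
```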
items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1511" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_lang<span class="opacity-60"> = None</span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForCTC.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForCTC.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig">SEWDConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>SEW-D Model with a <code>language modeling</code> head on top for Connectionist Temporal Classification (CTC).
SEW-D was proposed in <a href="https://arxiv.org/abs/2109.06870" rel="nofollow">Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition</a> by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving etc.).</p> <p>This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SEWDForCTC.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.SEWDForCTC.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SEWDForCTC.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1559" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: 
( input\_values: typing.Optional[torch.Tensor]attention\_mask: typing.Optional[torch.Tensor] = Noneoutput\_attentions: typing.Optional[bool] = Noneoutput\_hidden\_states: typing.Optional[bool] = Nonereturn\_dict: typing.Optional[bool] = Nonelabels: typing.Optional[torch.Tensor] = None ) → [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*) — Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`.
<p><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig">SEWDConfig</a>) and inputs.</p>
<ul>
<li>
<p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p>
</li>
<li>
<p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p>
</li>
<li>
<p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p>
</li>
<li>
<p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.</p>
</li>
</ul>
</p> </div></div> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForCTC">SEWDForCTC</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code>
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.SEWDForCTC.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForCTC.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, SEWDForCTC
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>)
<span class="hljs-meta">>>> </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>)
<span class="hljs-meta">>>> </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate
<span class="hljs-meta">>>> </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"asapp/sew-d-tiny-100k-ft-ls100h"</span>)
<span class="hljs-meta">>>> </span>model = SEWDForCTC.from_pretrained(<span class="hljs-string">"asapp/sew-d-tiny-100k-ft-ls100h"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># audio file is decoded on the fly</span>
<span class="hljs-meta">>>> </span>inputs = processor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad():
<span class="hljs-meta">... </span> logits = model(**inputs).logits
<span class="hljs-meta">>>> </span>predicted_ids = torch.argmax(logits, dim=-<span class="hljs-number">1</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># transcribe speech</span>
<span class="hljs-meta">>>> </span>transcription = processor.batch_decode(predicted_ids)
<span class="hljs-meta">>>> </span>transcription[<span class="hljs-number">0</span>]
<span class="hljs-string">'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'</span>
<span class="hljs-meta">>>> </span>inputs[<span class="hljs-string">"labels"</span>] = processor(text=dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"text"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_ids
<span class="hljs-meta">>>> </span><span class="hljs-comment"># compute loss</span>
<span class="hljs-meta">>>> </span>loss = model(**inputs).loss
<span class="hljs-meta">>>> </span><span class="hljs-built_in">round</span>(loss.item(), <span class="hljs-number">2</span>)
<span class="hljs-number">0.21</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.SEWDForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>SEWDForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SEWDForSequenceClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">SEWDForSequenceClassification</span></span></h3> <a id="transformers.SEWDForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.SEWDForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a 
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1647" target="_blank"><span><</span> <span class="hidden md:block mx-0.5 hover:!underline">source</span> <span>></span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span>)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForSequenceClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig">SEWDConfig</a>) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p>SEWD Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB
Keyword Spotting.</p> <p>SEW-D was proposed in <a href="https://arxiv.org/abs/2109.06870" rel="nofollow">Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition</a> by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.</p> <p>This model inherits from <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving etc.).</p> <p>This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
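Since the classification head maps the pooled encoder output to `config.num_labels` classes, inference follows the usual audio-classification pattern. The following is a minimal sketch that is not part of the original reference; the keyword-spotting checkpoint name is an assumption, so substitute any SEW-D checkpoint fine-tuned for audio classification, and the LibriSpeech clip is used only to demonstrate the API.

```
# audio classification sketch with SEWDForSequenceClassification
import torch
from transformers import AutoFeatureExtractor, SEWDForSequenceClassification
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

# assumed checkpoint fine-tuned for keyword spotting
checkpoint = "anton-l/sew-d-mid-400k-ft-keyword-spotting"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = SEWDForSequenceClassification.from_pretrained(checkpoint)

inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring class index back to its label
predicted_class_id = int(torch.argmax(logits, dim=-1))
print(model.config.id2label[predicted_class_id])
```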
#### forward

[](#transformers.SEWDForSequenceClassification.forward)[< source \>](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/sew_d/modeling_sew_d.py#L1692)

( input\_values: typing.Optional[torch.Tensor]attention\_mask: typing.Optional[torch.Tensor] = Noneoutput\_attentions: typing.Optional[bool] = Noneoutput\_hidden\_states: typing.Optional[bool] = Nonereturn\_dict: typing.Optional[bool] = Nonelabels: typing.Optional[torch.Tensor] = None ) → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.30.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **output_attentions** (`bool`, *optional*) —
Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned
tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForSequenceClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for
more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForSequenceClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) —
Whether or not to return a <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForSequenceClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) —
Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If
<code>config.num_labels > 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.SEWDForSequenceClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p>
<p><a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput">transformers.modeling_outputs.SequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p>
<span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base">
<p>A <a href="/docs/transformers/v4.30.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput">transformers.modeling_outputs.SequenceClassifierOutput</a> or a tuple of
<code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various
elements depending on the configuration (<a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDConfig">SEWDConfig</a>) and inputs.</p>
<ul>
<li>
<p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p>
</li>
<li>
<p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p>
</li>
<li>
<p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p>
<p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p>
</li>
<li>
<p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p>
<p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.</p>
</li>
</ul>
</p> </div></div> <p>The <a href="/docs/transformers/v4.30.0/en/model_doc/sew-d#transformers.SEWDForSequenceClassification">SEWDForSequenceClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code>
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.SEWDForSequenceClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p>Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, SEWDForSequenceClassification
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>)
<span class="hljs-meta">>>> </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>)
<span class="hljs-meta">>>> </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate
<span class="hljs-meta">>>> </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"anton-l/sew-d-mid-400k-ft-keyword-spotting"</span>)
<span class="hljs-meta">>>> </span>model = SEWDForSequenceClassification.from_pretrained(<span class="hljs-string">"anton-l/sew-d-mid-400k-ft-keyword-spotting"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-comment"># audio file is decoded on the fly</span>
<span class="hljs-meta">>>> </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>)
<span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad():
<span class="hljs-meta">... </span> logits = model(**inputs).logits
<span class="hljs-meta">>>> </span>predicted_class_ids = torch.argmax(logits, dim=-<span class="hljs-number">1</span>).item()
<span class="hljs-meta">>>> </span>predicted_label = model.config.id2label[predicted_class_ids]
<span class="hljs-meta">>>> </span>predicted_label
<span class="hljs-string">'_unknown_'</span>
<span class="hljs-meta">>>> </span><span class="hljs-comment"># compute loss - target_label is e.g. "down"</span>
<span class="hljs-meta">>>> </span>target_label = model.config.id2label[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span>inputs[<span class="hljs-string">"labels"</span>] = torch.tensor([model.config.label2id[target_label]])
<span class="hljs-meta">>>> </span>loss = model(**inputs).loss
<span class="hljs-meta">>>> </span><span class="hljs-built_in">round</span>(loss.item(), <span class="hljs-number">2</span>)
<span class="hljs-number">3.16</span></pre></div></div></div></div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/model_doc/sew" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>SEW</a>
<a href="/docs/transformers/model_doc/speech_to_text" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Speech2Text<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"SEW-D","isExpanded":true,"id":"sewd","url":"#sewd","sections":[{"title":"Overview","isExpanded":true,"id":"overview","url":"#overview"},{"title":"Documentation resources","isExpanded":true,"id":"documentation-resources","url":"#documentation-resources"},{"title":"SEWDConfig","isExpanded":true,"id":"transformers.SEWDConfig","url":"#transformers.SEWDConfig"},{"title":"SEWDModel","isExpanded":true,"id":"transformers.SEWDModel","url":"#transformers.SEWDModel"},{"title":"SEWDForCTC","isExpanded":true,"id":"transformers.SEWDForCTC","url":"#transformers.SEWDForCTC"},{"title":"SEWDForSequenceClassification","isExpanded":true,"id":"transformers.SEWDForSequenceClassification","url":"#transformers.SEWDForSequenceClassification"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#sewd" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-sewd">SE<wbr>W-D</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#documentation-resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-documentation-resources"><wbr>Documentation resources</a> <a href="#transformers.SEWDConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SEWDConfig">SEWD<wbr>Config</a> <a href="#transformers.SEWDModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SEWDModel">SEWD<wbr>Model</a> <a href="#transformers.SEWDForCTC" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SEWDForCTC">SEWD<wbr>ForCTC</a> <a href="#transformers.SEWDForSequenceClassification" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.SEWDForSequenceClassification">SEWD<wbr>For<wbr>Sequence<wbr>Classification</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/model_doc/sew-d" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/model_doc/sew-d");
}
</script>
<iframe name="__privateStripeMetricsController7680" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fmodel_doc%2Fsew-d&title=SEW-D&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:01.695Z |
Translation | https://huggingface.co/docs/transformers/tasks/translation | Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used to translate between texts in different languages, but they can also be applied to speech, or to combinations of the two such as text-to-speech and speech-to-text.
This guide will show you how to:
1. Finetune [T5](https://huggingface.co/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French.
2. Use your finetuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)
Before you begin, make sure you have all the necessary libraries installed:
```
pip install transformers datasets evaluate sacrebleu```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```
>>> from huggingface_hub import notebook_login
>>> notebook_login()```
## [](#load-opus-books-dataset)Load OPUS Books dataset
Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library:
```
>>> from datasets import load_dataset
>>> books = load_dataset("opus_books", "en-fr")```
Split the dataset into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:
```
>>> books = books["train"].train_test_split(test_size=0.2)```
Then take a look at an example:
```
>>> books["train"][0]
{'id': '90560',
'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}```
`translation`: an English and French translation of the text.
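Each example nests both languages under the `translation` key, so you can index straight into one side of the pair. For instance, reusing the example above:

```
>>> books["train"][0]["translation"]["en"]
'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.'
```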
## [](#preprocess)Preprocess
The next step is to load a T5 tokenizer to process the English-French language pairs:
```
>>> from transformers import AutoTokenizer
>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)```
The preprocessing function you want to create needs to:
1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Tokenize the input (English) and target (French) separately because you can’t tokenize French text with a tokenizer pretrained on an English vocabulary.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.
```
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "
>>> def preprocess_function(examples):
... inputs = [prefix + example[source_lang] for example in examples["translation"]]
... targets = [example[target_lang] for example in examples["translation"]]
... model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
... return model_inputs```
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```
>>> tokenized_books = books.map(preprocess_function, batched=True)```
Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
```
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)```
The same collator works in TensorFlow; additionally pass `return_tensors="tf"`:
```
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")```
## [](#evaluate)Evaluate
Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```
>>> import evaluate
>>> metric = evaluate.load("sacrebleu")```
Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the SacreBLEU score:
```
>>> import numpy as np
>>> def postprocess_text(preds, labels):
... preds = [pred.strip() for pred in preds]
... labels = [[label.strip()] for label in labels]
... return preds, labels
>>> def compute_metrics(eval_preds):
... preds, labels = eval_preds
... if isinstance(preds, tuple):
... preds = preds[0]
... decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
... labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
... decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
... result = metric.compute(predictions=decoded_preds, references=decoded_labels)
... result = {"bleu": result["score"]}
... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
... result["gen_len"] = np.mean(prediction_lens)
... result = {k: round(v, 4) for k, v in result.items()}
... return result```
Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
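As a quick, optional sanity check (not part of the original recipe), you can call `compute_metrics` on a tiny hand-made batch, reusing the `tokenizer`, `metric`, and helper functions defined above, and confirm that it returns a dictionary with `bleu` and `gen_len` keys:

```
>>> # build fake "predictions" directly from tokenized text, just to exercise the function
>>> batch = tokenizer(["translate English to French: Hello"], text_target=["Bonjour"], return_tensors="np")
>>> scores = compute_metrics((batch["input_ids"], batch["labels"]))
>>> sorted(scores.keys())
['bleu', 'gen_len']
```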
## [](#train)Train
If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
You’re ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM):
```
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)```
At this point, only three steps remain:
1. Define your training hyperparameters in [Seq2SeqTrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [Seq2SeqTrainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Seq2SeqTrainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_opus_books_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... weight_decay=0.01,
... save_total_limit=3,
... num_train_epochs=2,
... predict_with_generate=True,
... fp16=True,
... push_to_hub=True,
... )
>>> trainer = Seq2SeqTrainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_books["train"],
... eval_dataset=tokenized_books["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()```
Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:
```
>>> trainer.push_to_hub()```
If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```
>>> from transformers import AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)```
Then you can load T5 with [TFAutoModelForSeq2SeqLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM):
```
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)```
Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):
```
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_books["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... tokenized_books["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:
```
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) ```
The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):
```
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)```
Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):
```
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_opus_books_model",
... tokenizer=tokenizer,
... )```
Then bundle your callbacks together:
```
>>> callbacks = [metric_callback, push_to_hub_callback]```
Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for translation, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
## [](#inference)Inference
Great, now that you’ve finetuned a model, you can use it for inference!
Come up with some text you’d like to translate to another language. For T5, you need to prefix your input depending on the task you’re working on. For translation from English to French, you should prefix your input as shown below:
```
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."```
The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for translation with your model, and pass your text to it:
```
>>> from transformers import pipeline
>>> translator = pipeline("translation", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]```
You can also manually replicate the results of the `pipeline` if you’d like:
Tokenize the text and return the `input_ids` as PyTorch tensors:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids```
Use the [generate()](/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
```
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)```
Decode the generated token ids back into text:
```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'```
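Sampling produces a slightly different translation from run to run. If you prefer deterministic output, one option (not shown in the original guide) is to switch the same call to beam search; `do_sample=False` and `num_beams` are standard `generate()` parameters:

```
>>> # beam search instead of sampling: reproducible output across runs
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=False, num_beams=4)
```

Decode the result exactly as above.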
Tokenize the text and return the `input_ids` as TensorFlow tensors:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids```
Use the [generate()](/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate) method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
```
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)```
Decode the generated token ids back into text:
```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'```
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="Translation">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/tasks/translation">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>Translation</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"translation","sections":[{"local":"load-opus-books-dataset","title":"Load OPUS Books dataset"},{"local":"preprocess","title":"Preprocess"},{"local":"evaluate","title":"Evaluate"},{"local":"train","title":"Train"},{"local":"inference","title":"Inference"}],"title":"Translation"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":true,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","isExpanded":true,"id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":false,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/translation","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Translation"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 
2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Translation</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option value="15">v4.16.2</option><option 
value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/sequence_classification">Text classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/token_classification">Token classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/question_answering">Question answering </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/language_modeling">Causal language modeling </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/masked_language_modeling">Masked language modeling </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/translation">Translation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/summarization">Summarization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/multiple_choice">Multiple choice </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="translation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#translation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Translation</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/1JvfrvZgi6c" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p>Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between different language texts, but it can also be used for speech or some combination in between like text-to-speech or speech-to-text.</p> <p>This guide will show you how to:</p> <ol><li>Finetune <a href="https://huggingface.co/t5-small" rel="nofollow">T5</a> on the English-French subset of the <a href="https://huggingface.co/datasets/opus_books" rel="nofollow">OPUS Books</a> dataset to translate English text to French.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures:
<p><a href="../model_doc/bart">BART</a>, <a href="../model_doc/bigbird_pegasus">BigBird-Pegasus</a>, <a href="../model_doc/blenderbot">Blenderbot</a>, <a href="../model_doc/blenderbot-small">BlenderbotSmall</a>, <a href="../model_doc/encoder-decoder">Encoder decoder</a>, <a href="../model_doc/fsmt">FairSeq Machine-Translation</a>, <a href="../model_doc/gptsan-japanese">GPTSAN-japanese</a>, <a href="../model_doc/led">LED</a>, <a href="../model_doc/longt5">LongT5</a>, <a href="../model_doc/m2m_100">M2M100</a>, <a href="../model_doc/marian">Marian</a>, <a href="../model_doc/mbart">mBART</a>, <a href="../model_doc/mt5">MT5</a>, <a href="../model_doc/mvp">MVP</a>, <a href="../model_doc/nllb">NLLB</a>, <a href="../model_doc/nllb-moe">NLLB-MOE</a>, <a href="../model_doc/pegasus">Pegasus</a>, <a href="../model_doc/pegasus_x">PEGASUS-X</a>, <a href="../model_doc/plbart">PLBart</a>, <a href="../model_doc/prophetnet">ProphetNet</a>, <a href="../model_doc/switch_transformers">SwitchTransformers</a>, <a href="../model_doc/t5">T5</a>, <a href="../model_doc/xlm-prophetnet">XLM-ProphetNet</a></p></div> <p>Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>pip install transformers datasets evaluate sacrebleu</pre></div> <p>We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
<span class="hljs-meta">>>> </span>books = load_dataset(<span class="hljs-string">"opus_books"</span>, <span class="hljs-string">"en-fr"</span>)</pre></div> <p>Split the dataset into a train and test set with the <a href="https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.train_test_split" rel="nofollow">train_test_split</a> method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>books = books[<span class="hljs-string">"train"</span>].train_test_split(test_size=<span class="hljs-number">0.2</span>)</pre></div> <p>Then take a look at an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>books[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>]
{<span class="hljs-string">'id'</span>: <span class="hljs-string">'90560'</span>,
<span class="hljs-string">'translation'</span>: {<span class="hljs-string">'en'</span>: <span class="hljs-string">'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.'</span>,
<span class="hljs-string">'fr'</span>: <span class="hljs-string">'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'</span>}}</pre></div> <p><code>translation</code>: an English and French translation of the text.</p> <h2 class="relative group"><a id="preprocess" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocess</span></h2> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/XAR8jnZZuUs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p>The next step is to load a T5 tokenizer to process the English-French language pairs:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"t5-small"</span>
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint)</pre></div> <p>The preprocessing function you want to create needs to:</p> <ol><li>Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.</li> <li>Tokenize the input (English) and target (French) separately because you can’t tokenize French text with a tokenizer pretrained on an English vocabulary.</li> <li>Truncate sequences to be no longer than the maximum length set by the <code>max_length</code> parameter.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>source_lang = <span class="hljs-string">"en"</span>
<span class="hljs-meta">>>> </span>target_lang = <span class="hljs-string">"fr"</span>
<span class="hljs-meta">>>> </span>prefix = <span class="hljs-string">"translate English to French: "</span>
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">preprocess_function</span>(<span class="hljs-params">examples</span>):
<span class="hljs-meta">... </span> inputs = [prefix + example[source_lang] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"translation"</span>]]
<span class="hljs-meta">... </span> targets = [example[target_lang] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"translation"</span>]]
<span class="hljs-meta">... </span> model_inputs = tokenizer(inputs, text_target=targets, max_length=<span class="hljs-number">128</span>, truncation=<span class="hljs-literal">True</span>)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> model_inputs</pre></div> <p>To apply the preprocessing function over the entire dataset, use 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map" rel="nofollow">map</a> method. You can speed up the <code>map</code> function by setting <code>batched=True</code> to process multiple elements of the dataset at once:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>tokenized_books = books.<span class="hljs-built_in">map</span>(preprocess_function, batched=<span class="hljs-literal">True</span>)</pre></div> <p>Now create a batch of examples using <a href="/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq">DataCollatorForSeq2Seq</a>. 
Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

In PyTorch:

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```

In TensorFlow:

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
<span class="hljs-meta">>>> </span>metric = evaluate.load(<span class="hljs-string">"sacrebleu"</span>)</pre></div> <p>Then create a function that passes your predictions and labels to <a href="https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute" rel="nofollow">compute</a> to calculate the SacreBLEU score:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">postprocess_text</span>(<span class="hljs-params">preds, labels</span>):
<span class="hljs-meta">... </span> preds = [pred.strip() <span class="hljs-keyword">for</span> pred <span class="hljs-keyword">in</span> preds]
<span class="hljs-meta">... </span> labels = [[label.strip()] <span class="hljs-keyword">for</span> label <span class="hljs-keyword">in</span> labels]
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> preds, labels
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">compute_metrics</span>(<span class="hljs-params">eval_preds</span>):
<span class="hljs-meta">... </span> preds, labels = eval_preds
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-built_in">isinstance</span>(preds, <span class="hljs-built_in">tuple</span>):
<span class="hljs-meta">... </span> preds = preds[<span class="hljs-number">0</span>]
<span class="hljs-meta">... </span> decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=<span class="hljs-literal">True</span>)
<span class="hljs-meta">... </span> labels = np.where(labels != -<span class="hljs-number">100</span>, labels, tokenizer.pad_token_id)
<span class="hljs-meta">... </span> decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=<span class="hljs-literal">True</span>)
<span class="hljs-meta">... </span> decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
<span class="hljs-meta">... </span> result = metric.compute(predictions=decoded_preds, references=decoded_labels)
<span class="hljs-meta">... </span> result = {<span class="hljs-string">"bleu"</span>: result[<span class="hljs-string">"score"</span>]}
<span class="hljs-meta">... </span> prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) <span class="hljs-keyword">for</span> pred <span class="hljs-keyword">in</span> preds]
<span class="hljs-meta">... </span> result[<span class="hljs-string">"gen_len"</span>] = np.mean(prediction_lens)
<span class="hljs-meta">... </span> result = {k: <span class="hljs-built_in">round</span>(v, <span class="hljs-number">4</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> result.items()}
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> result</pre></div> <p>Your <code>compute_metrics</code> function is ready to go now, and you’ll return to it when you setup your training.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Train</span></h2> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip 
## [](#train)Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM):

```py
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [Seq2SeqTrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [Seq2SeqTrainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Seq2SeqTrainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
<span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_opus_books_model"</span>,
<span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">2e-5</span>,
<span class="hljs-meta">... </span> per_device_train_batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> per_device_eval_batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> weight_decay=<span class="hljs-number">0.01</span>,
<span class="hljs-meta">... </span> save_total_limit=<span class="hljs-number">3</span>,
<span class="hljs-meta">... </span> num_train_epochs=<span class="hljs-number">2</span>,
<span class="hljs-meta">... </span> predict_with_generate=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> fp16=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer = Seq2SeqTrainer(
<span class="hljs-meta">... </span> model=model,
<span class="hljs-meta">... </span> args=training_args,
<span class="hljs-meta">... </span> train_dataset=tokenized_books[<span class="hljs-string">"train"</span>],
<span class="hljs-meta">... </span> eval_dataset=tokenized_books[<span class="hljs-string">"test"</span>],
<span class="hljs-meta">... </span> tokenizer=tokenizer,
<span class="hljs-meta">... </span> data_collator=data_collator,
<span class="hljs-meta">... </span> compute_metrics=compute_metrics,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>trainer.train()</pre></div> <p>Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 
3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial <a href="../training#train-a-tensorflow-model-with-keras">here</a>!</p></div>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
<div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AdamWeightDecay
<span class="hljs-meta">>>> </span>optimizer = AdamWeightDecay(learning_rate=<span class="hljs-number">2e-5</span>, weight_decay_rate=<span class="hljs-number">0.01</span>)</pre></div> <p>Then you can load T5 with <a href="/docs/transformers/v4.30.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM">TFAutoModelForSeq2SeqLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSeq2SeqLM
<span class="hljs-meta">>>> </span>model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)</pre></div> <p>Convert your datasets to the <code>tf.data.Dataset</code> format with <a href="/docs/transformers/v4.30.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset">prepare_tf_dataset()</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>tf_train_set = model.prepare_tf_dataset(
<span class="hljs-meta">... </span> tokenized_books[<span class="hljs-string">"train"</span>],
<span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> collate_fn=data_collator,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>tf_test_set = model.prepare_tf_dataset(
<span class="hljs-meta">... </span> tokenized_books[<span class="hljs-string">"test"</span>],
<span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>,
<span class="hljs-meta">... </span> collate_fn=data_collator,
<span class="hljs-meta">... </span>)</pre></div> <p>Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
<span class="hljs-meta">>>> </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer) <span class="hljs-comment"># No loss argument!</span></pre></div> <p>The last two things to setup before you start training is to compute the SacreBLEU metric from the predictions, and provide a way to push your model to the Hub. Both are done by using <a href="../main_classes/keras_callbacks">Keras callbacks</a>.</p> <p>Pass your <code>compute_metrics</code> function to <a href="/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback">KerasMetricCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> KerasMetricCallback
<span class="hljs-meta">>>> </span>metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)</pre></div> <p>Specify where to push your model and tokenizer in the <a href="/docs/transformers/v4.30.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback
<span class="hljs-meta">>>> </span>push_to_hub_callback = PushToHubCallback(
<span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_opus_books_model"</span>,
<span class="hljs-meta">... </span> tokenizer=tokenizer,
<span class="hljs-meta">... </span>)</pre></div> <p>Then bundle your callbacks together:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>callbacks = [metric_callback, push_to_hub_callback]</pre></div> <p>Finally, you’re ready to start training your model! Call <a href="https://keras.io/api/models/model_training_apis/#fit-method" rel="nofollow"><code>fit</code></a> with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=<span class="hljs-number">3</span>, callbacks=callbacks)</pre></div> <p>Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!</p></div></div> </div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>For a more in-depth example of how to finetune a model for translation, take a look at the corresponding
<a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb" rel="nofollow">PyTorch notebook</a>
or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Inference</span></h2> <p>Great, now that you’ve finetuned a model, you can use it for inference!</p> <p>Come up with some text you’d like to translate to another language. For T5, you need to prefix your input depending on the task you’re working on. For translation from English to French, you should prefix your input as shown below:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>text = <span class="hljs-string">"translate English to French: Legumes share resources with nitrogen-fixing bacteria."</span></pre></div> <p>The simplest way to try out your finetuned model for inference is to use it in a <a href="/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. 
Instantiate a <code>pipeline</code> for translation with your model, and pass your text to it:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline
<span class="hljs-meta">>>> </span>translator = pipeline(<span class="hljs-string">"translation"</span>, model=<span class="hljs-string">"my_awesome_opus_books_model"</span>)
<span class="hljs-meta">>>> </span>translator(text)
[{<span class="hljs-string">'translation_text'</span>: <span class="hljs-string">'Legumes partagent des ressources avec des bactéries azotantes.'</span>}]</pre></div> <p>You can also manually replicate the results of the <code>pipeline</code> if you’d like:</p> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p>Tokenize the text and return the <code>input_ids</code> as PyTorch tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform 
-translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>)
<span class="hljs-meta">>>> </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>).input_ids</pre></div> <p>Use the <a href="/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate">generate()</a> method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the <a href="../main_classes/text_generation">Text Generation</a> API.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSeq2SeqLM
<span class="hljs-meta">>>> </span>model = AutoModelForSeq2SeqLM.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>)
<span class="hljs-meta">>>> </span>outputs = model.generate(inputs, max_new_tokens=<span class="hljs-number">40</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">30</span>, top_p=<span class="hljs-number">0.95</span>)</pre></div> <p>Decode the generated token ids back into text:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>tokenizer.decode(outputs[<span class="hljs-number">0</span>], skip_special_tokens=<span class="hljs-literal">True</span>)
<span class="hljs-string">'Les lignées partagent des ressources avec des bactéries enfixant l'</span>azote.<span class="hljs-string">'</span></pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p>Tokenize the text and return the <code>input_ids</code> as TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" 
width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer
<span class="hljs-meta">>>> </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>)
<span class="hljs-meta">>>> </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"tf"</span>).input_ids</pre></div> <p>Use the <a href="/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate">generate()</a> method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the <a href="../main_classes/text_generation">Text Generation</a> API.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSeq2SeqLM
<span class="hljs-meta">>>> </span>model = TFAutoModelForSeq2SeqLM.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>)
<span class="hljs-meta">>>> </span>outputs = model.generate(inputs, max_new_tokens=<span class="hljs-number">40</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">30</span>, top_p=<span class="hljs-number">0.95</span>)</pre></div> <p>Decode the generated token ids back into text:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>tokenizer.decode(outputs[<span class="hljs-number">0</span>], skip_special_tokens=<span class="hljs-literal">True</span>)
<span class="hljs-string">'Les lugumes partagent les ressources avec des bactéries fixatrices d'</span>azote.<span class="hljs-string">'</span></pre></div></div></div> </div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/masked_language_modeling" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Masked language modeling</a>
<a href="/docs/transformers/tasks/summarization" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Summarization<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Translation","isExpanded":true,"id":"translation","url":"#translation","sections":[{"title":"Load OPUS Books dataset","isExpanded":true,"id":"load-opus-books-dataset","url":"#load-opus-books-dataset"},{"title":"Preprocess","isExpanded":true,"id":"preprocess","url":"#preprocess"},{"title":"Evaluate","isExpanded":true,"id":"evaluate","url":"#evaluate"},{"title":"Train","isExpanded":true,"id":"train","url":"#train"},{"title":"Inference","isExpanded":true,"id":"inference","url":"#inference"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#translation" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-translation"><wbr>Translation</a> <a href="#load-opus-books-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-opus-books-dataset"><wbr>Load OPU<wbr>S <wbr>Books dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/translation" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/translation");
}
</script>
<iframe name="__privateStripeMetricsController3880" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Ftranslation&title=Translation&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:02.120Z |
Monocular depth estimation | https://huggingface.co/docs/transformers/tasks/monocular_depth_estimation | Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint.
Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture.
The task illustrated in this tutorial is supported by the following model architectures:
[DPT](../model_doc/dpt), [GLPN](../model_doc/glpn)
In this guide you’ll learn how to:
- create a depth estimation pipeline
- run depth estimation inference by hand
Before you begin, make sure you have all the necessary libraries installed:
```
pip install -q transformers
```
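The examples below additionally use `PIL`, `requests`, `torch`, and `numpy`, which the command above does not install. If any of them are missing from your environment, one way to add them (exact requirements depend on your setup) is:

```
pip install -q pillow requests torch numpy
```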
## [](#depth-estimation-pipeline)Depth estimation pipeline
The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads):
```
>>> from transformers import pipeline
>>> checkpoint = "vinvino02/glpn-nyu"
>>> depth_estimator = pipeline("depth-estimation", model=checkpoint)
```
Next, choose an image to analyze:
```
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
![Photo of a busy street](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg)
Pass the image to the pipeline.
```
>>> predictions = depth_estimator(image)
```
The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor whose values give the depth in meters for each pixel. The second one, `depth`, is a PIL image that visualizes the depth estimation result.
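Before looking at the rendered image, you can sanity-check both entries. The tensor shape in the comment below is only an example; the actual value depends on the checkpoint and the input image:

```
>>> predictions["predicted_depth"].shape  # a torch tensor, e.g. torch.Size([1, 480, 640])
>>> predictions["depth"].save("depth-visualization.png")  # the PIL image can be saved like any other
```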
Let’s take a look at the visualized result:
![Depth estimation visualization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png)
## [](#depth-estimation-inference-by-hand)Depth estimation inference by hand
Now that you’ve seen how to use the depth estimation pipeline, let’s see how we can replicate the same result by hand.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads). Here we’ll use the same checkpoint as before:
```
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> checkpoint = "vinvino02/glpn-nyu"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
```
Prepare the image input for the model using the `image_processor`, which takes care of the necessary image transformations such as resizing and normalization:
```
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
```
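If you want to confirm what the processor produced, inspect the returned tensor. The spatial dimensions in the comment are an assumption, since they depend on the checkpoint’s preprocessing settings:

```
>>> pixel_values.shape  # (batch_size, num_channels, height, width), e.g. torch.Size([1, 3, 480, 640])
```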
Pass the prepared inputs through the model:
```
>>> import torch
>>> with torch.no_grad():
...     outputs = model(pixel_values)
...     predicted_depth = outputs.predicted_depth
```
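The raw `predicted_depth` tensor comes out at the model’s working resolution, which may not match the original image, so the next step resizes it before visualization. A quick illustrative check:

```
>>> predicted_depth.shape  # (batch_size, height, width) at the model's working resolution
>>> image.size  # (width, height) of the original PIL image; note the reversed order used below
```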
Visualize the results:
```
>>> import numpy as np
>>> # interpolate the prediction to the original image size
>>> prediction = torch.nn.functional.interpolate(
...     predicted_depth.unsqueeze(1),
...     size=image.size[::-1],
...     mode="bicubic",
...     align_corners=False,
... ).squeeze()
>>> output = prediction.numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
>>> depth
```
![Depth estimation visualization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png)
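If you would rather have a colored depth map than the grayscale image above, one option is to run the normalized depth through a matplotlib colormap. This is a minimal sketch rather than part of the original recipe, and it assumes matplotlib is installed:

```
>>> import matplotlib.cm as cm

>>> normalized = output / np.max(output)  # scale depth values to [0, 1]
>>> colored = (cm.plasma(normalized)[:, :, :3] * 255).astype("uint8")  # drop the alpha channel
>>> Image.fromarray(colored)
```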
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":true,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","isExpanded":true,"id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/monocular_depth_estimation","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Monocular depth estimation"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Monocular depth estimation</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-4" href="/docs/transformers/tasks/image_classification">Image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/semantic_segmentation">Semantic segmentation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/video_classification">Video classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/object_detection">Object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_object_detection">Zero-shot object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_image_classification">Zero-shot image classification </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="monocular-depth-estimation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#monocular-depth-estimation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Monocular depth estimation</span></h1> <p>Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a
single image. In other words, it is the process of estimating the distance of objects in a scene from
a single camera viewpoint.</p> <p>Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving,
and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects
in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions,
occlusion, and texture.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures:
<p><a href="../model_doc/dpt">DPT</a>, <a href="../model_doc/glpn">GLPN</a></p></div> <p>In this guide you’ll learn how to:</p> <ul><li>create a depth estimation pipeline</li> <li>run depth estimation inference by hand</li></ul> <p>Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>pip install -q transformers</pre></div> <h2 class="relative group"><a id="depth-estimation-pipeline" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#depth-estimation-pipeline"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Depth estimation pipeline</span></h2> <p>The simplest way to try out inference with a model supporting depth estimation is to use the corresponding <a href="/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>.
Instantiate a pipeline from a <a href="https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline
<span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"vinvino02/glpn-nyu"</span>
<span class="hljs-meta">>>> </span>depth_estimator = pipeline(<span class="hljs-string">"depth-estimation"</span>, model=checkpoint)</pre></div> <p>Next, choose an image to analyze:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> requests
<span class="hljs-meta">>>> </span>url = <span class="hljs-string">"https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"</span>
<span class="hljs-meta">>>> </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw)
<span class="hljs-meta">>>> </span>image</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"></div> <p>Pass the image to the pipeline.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>predictions = depth_estimator(image)</pre></div> <p>The pipeline returns a dictionary with two entries. The first one, called <code>predicted_depth</code>, is a tensor with the values
being the depth expressed in meters for each pixel.
The second one, <code>depth</code>, is a PIL image that visualizes the depth estimation result.</p> <p>Let’s take a look at the visualized result:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>predictions[<span class="hljs-string">"depth"</span>]</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"></div> <h2 class="relative group"><a id="depth-estimation-inference-by-hand" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#depth-estimation-inference-by-hand"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Depth estimation inference by hand</span></h2> <p>Now that you’ve seen how to use the depth estimation pipeline, let’s see how we can replicate the same result by hand.</p> <p>Start by loading the model and associated processor from a <a href="https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>.
Here we’ll use the same checkpoint as before:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, AutoModelForDepthEstimation
<span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"vinvino02/glpn-nyu"</span>
<span class="hljs-meta">>>> </span>image_processor = AutoImageProcessor.from_pretrained(checkpoint)
<span class="hljs-meta">>>> </span>model = AutoModelForDepthEstimation.from_pretrained(checkpoint)</pre></div> <p>Prepare the image input for the model using the <code>image_processor</code> that will take care of the necessary image transformations
such as resizing and normalization:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>pixel_values = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>).pixel_values</pre></div> <p>Pass the prepared inputs through the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad():
<span class="hljs-meta">... </span> outputs = model(pixel_values)
<span class="hljs-meta">... </span> predicted_depth = outputs.predicted_depth</pre></div> <p>Visualize the results:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-meta">>>> </span><span class="hljs-comment"># interpolate to original size</span>
<span class="hljs-meta">>>> </span>prediction = torch.nn.functional.interpolate(
<span class="hljs-meta">... </span> predicted_depth.unsqueeze(<span class="hljs-number">1</span>),
<span class="hljs-meta">... </span> size=image.size[::-<span class="hljs-number">1</span>],
<span class="hljs-meta">... </span> mode=<span class="hljs-string">"bicubic"</span>,
<span class="hljs-meta">... </span> align_corners=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span>).squeeze()
<span class="hljs-meta">>>> </span>output = prediction.numpy()
<span class="hljs-meta">>>> </span>formatted = (output * <span class="hljs-number">255</span> / np.<span class="hljs-built_in">max</span>(output)).astype(<span class="hljs-string">"uint8"</span>)
<span class="hljs-meta">>>> </span>depth = Image.fromarray(formatted)
<span class="hljs-meta">>>> </span>depth</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"></div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/zero_shot_image_classification" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Zero-shot image classification</a>
<a href="/docs/transformers/tasks/image_captioning" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Image captioning<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Monocular depth estimation","isExpanded":true,"id":"monocular-depth-estimation","url":"#monocular-depth-estimation","sections":[{"title":"Depth estimation pipeline","isExpanded":true,"id":"depth-estimation-pipeline","url":"#depth-estimation-pipeline"},{"title":"Depth estimation inference by hand","isExpanded":true,"id":"depth-estimation-inference-by-hand","url":"#depth-estimation-inference-by-hand"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#monocular-depth-estimation" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-monocular-depth-estimation"><wbr>Monocular depth estimation</a> <a href="#depth-estimation-pipeline" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-depth-estimation-pipeline"><wbr>Depth estimation pipeline</a> <a href="#depth-estimation-inference-by-hand" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-depth-estimation-inference-by-hand"><wbr>Depth estimation inference by hand</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/monocular_depth_estimation" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/monocular_depth_estimation");
}
</script>
<iframe name="__privateStripeMetricsController1910" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fmonocular_depth_estimation&title=Monocular%20depth%20estimation&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:02.326Z |
Document Question Answering | https://huggingface.co/docs/transformers/tasks/document_question_answering | Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including text, the positions of words (bounding boxes), and the image itself.
This guide illustrates how to:
- Fine-tune [LayoutLMv2](../model_doc/layoutlmv2) on the [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut).
- Use your fine-tuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
[LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3)
LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine, in this case Google’s Tesseract.
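To make the start/end mechanics concrete, here is a minimal sketch that runs the base (not yet fine-tuned) checkpoint through `LayoutLMv2ForQuestionAnswering`. It assumes the dependencies installed below, and because the QA head is freshly initialized at this point, the decoded span is meaningless until after fine-tuning:

```
# Illustrative only: the QA head of the base checkpoint is randomly initialized,
# so the predicted span is not a real answer before fine-tuning.
from PIL import Image
import requests
import torch
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

# Example document image shown later in this guide
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained("microsoft/layoutlmv2-base-uncased")

# The processor runs Tesseract OCR on the image and tokenizes the question
# together with the recognized words and their bounding boxes.
encoding = processor(image, "What is the date on this document?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# The QA head emits one start logit and one end logit per token; the predicted
# answer is the token span between the two argmaxes.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.tokenizer.decode(encoding["input_ids"][0][start : end + 1]))```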
Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract.
```
pip install -q transformers datasets```
```
pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install torchvision```
```
sudo apt install tesseract-ocr
pip install -q pytesseract```
Once you have installed all of the dependencies, restart your runtime.
We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:
```
>>> from huggingface_hub import notebook_login
>>> notebook_login()```
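If you are working outside a notebook, you can log in from a terminal instead:

```
huggingface-cli login```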
Let’s define some global variables.
```
>>> model_checkpoint = "microsoft/layoutlmv2-base-uncased"
>>> batch_size = 4```
## [](#load-the-data)Load the data
In this guide we use a small sample of preprocessed DocVQA that you can find on the 🤗 Hub. If you’d like to use the full DocVQA dataset, you can register and download it on the [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17). If you do so, check out [how to load files into a 🤗 dataset](https://huggingface.co/docs/datasets/loading#local-and-remote-files) to proceed with this guide.
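As a rough, hypothetical sketch of that path (the file names below are placeholders for wherever you extracted the downloaded annotations, not the official DocVQA layout):

```
from datasets import load_dataset

# Placeholder paths: point data_files at the annotation files you downloaded.
dataset = load_dataset(
    "json",
    data_files={"train": "path/to/docvqa/train.json", "test": "path/to/docvqa/test.json"},
)```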
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("nielsr/docvqa_1200_examples")
>>> dataset
DatasetDict({
train: Dataset({
features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
num_rows: 1000
})
test: Dataset({
features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
num_rows: 200
})
})```
As you can see, the dataset is split into train and test sets already. Take a look at a random example to familiarize yourself with the features.
```
>>> dataset["train"].features```
Here’s what the individual fields represent:
- `id`: the example’s id
- `image`: a PIL.Image.Image object containing the document image
- `query`: the question string, i.e. the natural language question asked, available in several languages
- `answers`: a list of correct answers provided by human annotators
- `words` and `bounding_boxes`: the results of OCR, which we will not use here
- `answer`: an answer matched by a different model, which we will not use here
Let’s leave only English questions, and drop the `answer` feature which appears to contain predictions by another model. We’ll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it.
```
>>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"])
>>> updated_dataset = updated_dataset.map(
... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"]
... )```
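If you’d rather pick a random reference answer instead of always taking the first one, the second `map` call above can be swapped for a variant along these lines (a minimal sketch, assuming every example has at least one annotated answer):
```
>>> import random
>>> updated_dataset = updated_dataset.map(
...     lambda example: {"answer": random.choice(example["answers"])}, remove_columns=["answer", "answers"]
... )```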
Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can find this information in the [checkpoint’s `config.json` file](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)). We could truncate the examples, but to avoid the situation where the answer sits at the end of a large document and ends up truncated, here we’ll remove the few examples whose encoded input is likely to end up longer than 512 tokens. If most of the documents in your dataset are long, you can implement a sliding window strategy instead - check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) for details.
```
>>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512)```
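If you decide to go with the sliding window strategy mentioned above instead of filtering, here is a rough sketch of what the encoding step could look like, where `words` and `boxes` are the OCR words and bounding boxes for a document. It assumes a fast LayoutLMv2 tokenizer that supports the standard `stride` and `return_overflowing_tokens` arguments, and the helper name `encode_with_sliding_window` is purely illustrative; you would still need to map the answer positions into each window afterwards, as shown in the linked notebook.
```
>>> from transformers import AutoTokenizer
>>> sliding_tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> def encode_with_sliding_window(question, words, boxes, max_length=512, stride=128):
...     # split one long document into several overlapping windows instead of truncating it;
...     # each window becomes its own entry in the returned encoding
...     return sliding_tokenizer(
...         question,
...         words,
...         boxes=boxes,
...         max_length=max_length,
...         stride=stride,
...         truncation="only_second",
...         padding="max_length",
...         return_overflowing_tokens=True,
...     )```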
At this point let’s also remove the OCR features from this dataset. These are the result of OCR performed to fine-tune a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [LayoutLMv2Processor](/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor) on the original data for both OCR and tokenization. This way we’ll get the inputs that match the model’s expected input. If you want to process images manually, check out the [`LayoutLMv2` model documentation](../model_doc/layoutlmv2) to learn what input format the model expects.
```
>>> updated_dataset = updated_dataset.remove_columns("words")
>>> updated_dataset = updated_dataset.remove_columns("bounding_boxes")```
Finally, the data exploration won’t be complete if we don’t peek at an image example.
```
>>> updated_dataset["train"][11]["image"]```
![DocVQA Image Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg)
## [](#preprocess-the-data)Preprocess the data
The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model’s expectations. Let’s start by loading the [LayoutLMv2Processor](/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor), which internally combines an image processor that can handle image data and a tokenizer that can encode text data.
```
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)```
### [](#preprocessing-document-images)Preprocessing document images
First, let’s prepare the document images for the model with the help of the `image_processor` from the processor. By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels, and applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR.
```
>>> image_processor = processor.image_processor
>>> def get_ocr_words_and_boxes(examples):
... images = [image.convert("RGB") for image in examples["image"]]
... encoded_inputs = image_processor(images)
... examples["image"] = encoded_inputs.pixel_values
... examples["words"] = encoded_inputs.words
... examples["boxes"] = encoded_inputs.boxes
... return examples```
To apply this preprocessing to the entire dataset in a fast way, use [map](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map).
```
>>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2)```
### [](#preprocessing-text-data)Preprocessing text data
Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. For preprocessing text, we’ll need the `tokenizer` from the processor.
```
>>> tokenizer = processor.tokenizer```
On top of the preprocessing mentioned above, we also need to add the labels for the model. For `xxxForQuestionAnswering` models in 🤗 Transformers, the labels consist of the `start_positions` and `end_positions`, indicating which token is at the start and which token is at the end of the answer.
Let’s start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list).
This function takes two lists as input, `words_list` and `answer_list`. It iterates over `words_list` and checks whether the current word in `words_list` (`words_list[idx]`) is equal to the first word of `answer_list` (`answer_list[0]`), and whether the sublist of `words_list` starting from the current word and of the same length as `answer_list` is equal to `answer_list`. If this condition is true, it means that a match has been found, and the function records the match, its starting index (`idx`), and its ending index (`idx + len(answer_list) - 1`). If more than one match was found, the function returns only the first one. If no match is found, the function returns (`None`, 0, 0).
```
>>> def subfinder(words_list, answer_list):
... matches = []
... start_indices = []
... end_indices = []
...     for idx in range(len(words_list)):
...         if words_list[idx] == answer_list[0] and words_list[idx : idx + len(answer_list)] == answer_list:
... matches.append(answer_list)
... start_indices.append(idx)
... end_indices.append(idx + len(answer_list) - 1)
... if matches:
... return matches[0], start_indices[0], end_indices[0]
... else:
... return None, 0, 0```
To illustrate how this function finds the position of the answer, let’s use it on an example:
```
>>> example = dataset_with_ocr["train"][1]
>>> words = [word.lower() for word in example["words"]]
>>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split())
>>> print("Question: ", example["question"])
>>> print("Words:", words)
>>> print("Answer: ", example["answer"])
>>> print("start_index", word_idx_start)
>>> print("end_index", word_idx_end)
Question: Who is in cc in this letter?
Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498']
Answer: T.F. Riehl
start_index 17
end_index 18```
Once examples are encoded, however, they will look like this:
```
>>> encoding = tokenizer(example["question"], example["words"], example["boxes"])
>>> tokenizer.decode(encoding["input_ids"])
[CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ...```
We’ll need to find the position of the answer in the encoded input.
- `token_type_ids` tells us which tokens are part of the question, and which ones are part of the document’s words.
- `tokenizer.cls_token_id` will help find the special token at the beginning of the input.
- `word_ids` will help match the answer found in the original `words` to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input.
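To get a feel for what `word_ids` gives you, you can peek at the mapping for the encoded example from above. This relies on the tokenizer being a fast tokenizer, which the encoding function below assumes as well:
```
>>> # each token is mapped to the index of the word it came from within its sequence (None for special tokens)
>>> print(encoding.word_ids()[:10])```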
With that in mind, let’s create a function to encode a batch of examples in the dataset:
```
>>> def encode_dataset(examples, max_length=512):
... questions = examples["question"]
... words = examples["words"]
... boxes = examples["boxes"]
... answers = examples["answer"]
...
... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True)
... start_positions = []
... end_positions = []
...     # loop over the examples in the batch
... for i in range(len(questions)):
... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id)
...         # find the position of the answer in the example's words
... words_example = [word.lower() for word in words[i]]
... answer = answers[i]
... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split())
... if match:
...             # if the answer is found, use token_type_ids to locate the document words in the encoding
... token_type_ids = encoding["token_type_ids"][i]
... token_start_index = 0
... while token_type_ids[token_start_index] != 1:
... token_start_index += 1
... token_end_index = len(encoding["input_ids"][i]) - 1
... while token_type_ids[token_end_index] != 1:
... token_end_index -= 1
... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1]
... start_position = cls_index
... end_position = cls_index
...             # map the word-level answer positions to token-level start and end positions
... for id in word_ids:
... if id == word_idx_start:
... start_position = token_start_index
... else:
... token_start_index += 1
...
... for id in word_ids[::-1]:
... if id == word_idx_end:
... end_position = token_end_index
... else:
... token_end_index -= 1
... start_positions.append(start_position)
... end_positions.append(end_position)
... else:
... start_positions.append(cls_index)
... end_positions.append(cls_index)
... encoding["image"] = examples["image"]
... encoding["start_positions"] = start_positions
... encoding["end_positions"] = end_positions
... return encoding```
Now that we have this preprocessing function, we can encode the entire dataset:
```
>>> encoded_train_dataset = dataset_with_ocr["train"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names
... )
>>> encoded_test_dataset = dataset_with_ocr["test"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names
... )```
Let’s check what the features of the encoded dataset look like:
```
>>> encoded_train_dataset.features
{'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None),
'start_positions': Value(dtype='int64', id=None),
'end_positions': Value(dtype='int64', id=None)}```
## [](#evaluation)Evaluation
Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance. Extractive question answering is typically evaluated using F1/exact match. If you’d like to implement it yourself, check out the [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) of the Hugging Face course for inspiration.
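If you do want a quick number, here is a minimal sketch of scoring decoded answers with SQuAD-style exact match and F1 using the 🤗 Evaluate library (assuming it is installed with `pip install evaluate`). The prediction strings would come from the same post-processing shown in the inference section below, and the example entries here are purely illustrative:
```
>>> import evaluate
>>> squad_metric = evaluate.load("squad")
>>> # decoded model predictions and annotated answers, one entry per evaluation example
>>> predictions = [{"id": "0", "prediction_text": "lee a. waller"}]
>>> references = [{"id": "0", "answers": {"text": ["Lee A. Waller"], "answer_start": [0]}}]
>>> # returns a dict with "exact_match" and "f1", both computed after SQuAD-style normalization
>>> squad_metric.compute(predictions=predictions, references=references)```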
## [](#train)Train
Congratulations! You’ve successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps:
- Load the model with [AutoModelForDocumentQuestionAnswering](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForDocumentQuestionAnswering) using the same checkpoint as in the preprocessing.
- Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments).
- Define a function to batch examples together; here the [DefaultDataCollator](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DefaultDataCollator) will do just fine.
- Pass the training arguments to [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, and data collator.
- Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)```
In the [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments) use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed.
```
>>> from transformers import TrainingArguments
>>> # replace this with your repo id on the 🤗 Hub
>>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa"
>>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... evaluation_strategy="steps",
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )```
Define a simple data collator to batch examples together.
```
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()```
Finally, bring everything together, and call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train):
```
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=encoded_train_dataset,
... eval_dataset=encoded_test_dataset,
... tokenizer=processor,
... )
>>> trainer.train()```
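Once training has finished, you can also ask the trainer for the evaluation loss on the test split directly. This is the same loss-based check mentioned in the Evaluation section above, not a full exact match/F1 evaluation:
```
>>> # computes the loss (and runtime statistics) on encoded_test_dataset
>>> metrics = trainer.evaluate()
>>> print(metrics)```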
To add the final model to 🤗 Hub, create a model card and call `push_to_hub`:
```
>>> trainer.create_model_card()
>>> trainer.push_to_hub()```
## [](#inference)Inference
Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.Pipeline).
Let’s take an example:
```
>>> example = dataset["test"][2]
>>> question = example["query"]["en"]
>>> image = example["image"]
>>> print(question)
>>> print(example["answers"])
'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']```
Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it.
```
>>> from transformers import pipeline
>>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> qa_pipeline(image, question)
[{'score': 0.9949808120727539,
'answer': 'Lee A. Waller',
'start': 55,
'end': 57}]```
You can also manually replicate the results of the pipeline if you’d like:
1. Take an image and a question, prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape (batch_size, sequence_length).
4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`.
5. Decode the answer with the tokenizer.
```
>>> import torch
>>> from transformers import AutoProcessor
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> with torch.no_grad():
... encoding = processor(image.convert("RGB"), question, return_tensors="pt")
... outputs = model(**encoding)
... start_logits = outputs.start_logits
... end_logits = outputs.end_logits
... predicted_start_idx = start_logits.argmax(-1).item()
... predicted_end_idx = end_logits.argmax(-1).item()
>>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
'lee a. waller'```
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/document_question_answering","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Document Question Answering"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Document Question Answering</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/image_captioning">Image captioning </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/document_question_answering">Document Question Answering </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/text-to-speech">Text to speech </a> </div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="document-question-answering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#document-question-answering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Document Question Answering</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p>Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing
answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and
a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including
text, the positions of words (bounding boxes), and the image itself.</p> <p>This guide illustrates how to:</p> <ul><li>Fine-tune <a href="../model_doc/layoutlmv2">LayoutLMv2</a> on the <a href="https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut" rel="nofollow">DocVQA dataset</a>.</li> <li>Use your fine-tuned model for inference.</li></ul> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>The task illustrated in this tutorial is supported by the following model architectures:</p> <p><a href="../model_doc/layoutlm">LayoutLM</a>, <a href="../model_doc/layoutlmv2">LayoutLMv2</a>, <a href="../model_doc/layoutlmv3">LayoutLMv3</a></p></div> <p>LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden
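To get a feel for the task before diving into fine-tuning, you can query a ready-made checkpoint through the `document-question-answering` pipeline. The snippet below is only an illustration of the task's inputs and outputs, not part of the fine-tuning workflow; the checkpoint is a publicly available LayoutLM model fine-tuned for document QA, the image path is a placeholder, and running it also requires the OCR dependencies installed below.

```py
from transformers import pipeline

# Publicly available checkpoint fine-tuned for document QA, used only as an illustration.
doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

# "invoice.png" is a placeholder path to a document image on disk.
doc_qa(image="invoice.png", question="What is the invoice number?")
# Returns a list of candidate answers, e.g. [{'score': ..., 'answer': ..., 'start': ..., 'end': ...}]
```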
states of the tokens, to predict the positions of the start and end tokens of the
answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece
of information answers the question. The context comes from the output of an OCR engine, in this case Google’s Tesseract.</p> <p>Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract.</p> <div class="code-block relative"><pre>pip install -q transformers datasets</pre></div> <div class="code-block relative"><pre>pip install <span class="hljs-string">'git+https://github.com/facebookresearch/detectron2.git'</span>
pip install torchvision</pre></div> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>sudo apt install tesseract-ocr
pip install -q pytesseract</pre></div> <p>Once you have installed all of the dependencies, restart your runtime.</p> <p>We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub.
When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login
<span class="hljs-meta">>>> </span>notebook_login()</pre></div> <p>Let’s define some global variables.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>model_checkpoint = <span class="hljs-string">"microsoft/layoutlmv2-base-uncased"</span>
<span class="hljs-meta">>>> </span>batch_size = <span class="hljs-number">4</span></pre></div> <h2 class="relative group"><a id="load-the-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-the-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load the data</span></h2> <p>In this guide we use a small sample of preprocessed DocVQA that you can find on 🤗 Hub. If you’d like to use the full
DocVQA dataset, you can register and download it on the <a href="https://rrc.cvc.uab.es/?ch=17" rel="nofollow">DocVQA homepage</a>. If you do so, check out
<a href="https://huggingface.co/docs/datasets/loading#local-and-remote-files" rel="nofollow">how to load files into a 🤗 dataset</a> to proceed with this guide.</p> <div class="code-block relative"><pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-meta">>>> </span>dataset = load_dataset(<span class="hljs-string">"nielsr/docvqa_1200_examples"</span>)
<span class="hljs-meta">>>> </span>dataset
DatasetDict({
train: Dataset({
features: [<span class="hljs-string">'id'</span>, <span class="hljs-string">'image'</span>, <span class="hljs-string">'query'</span>, <span class="hljs-string">'answers'</span>, <span class="hljs-string">'words'</span>, <span class="hljs-string">'bounding_boxes'</span>, <span class="hljs-string">'answer'</span>],
num_rows: <span class="hljs-number">1000</span>
})
test: Dataset({
features: [<span class="hljs-string">'id'</span>, <span class="hljs-string">'image'</span>, <span class="hljs-string">'query'</span>, <span class="hljs-string">'answers'</span>, <span class="hljs-string">'words'</span>, <span class="hljs-string">'bounding_boxes'</span>, <span class="hljs-string">'answer'</span>],
num_rows: <span class="hljs-number">200</span>
})
})</pre></div> <p>As you can see, the dataset is split into train and test sets already. Take a look at a random example to familiarize
yourself with the features.</p> <div class="code-block relative"><pre><span class="hljs-meta">>>> </span>dataset[<span class="hljs-string">"train"</span>].features</pre></div> <p>Here’s what the individual fields represent:</p> <ul><li><code>id</code>: the example’s id</li> <li><code>image</code>: a PIL.Image.Image object containing the document image</li> <li><code>query</code>: the question string - a natural language question, available in several languages</li> <li><code>answers</code>: a list of correct answers provided by human annotators</li> <li><code>words</code> and <code>bounding_boxes</code>: the results of OCR, which we will not use here</li> <li><code>answer</code>: an answer matched by a different model, which we will not use here</li></ul> <p>Let’s leave only English questions, and drop the <code>answer</code> feature, which appears to contain predictions by another model.
We’ll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>updated_dataset = dataset.<span class="hljs-built_in">map</span>(<span class="hljs-keyword">lambda</span> example: {<span class="hljs-string">"question"</span>: example[<span class="hljs-string">"query"</span>][<span class="hljs-string">"en"</span>]}, remove_columns=[<span class="hljs-string">"query"</span>])
<span class="hljs-meta">>>> </span>updated_dataset = updated_dataset.<span class="hljs-built_in">map</span>(
<span class="hljs-meta">... </span> <span class="hljs-keyword">lambda</span> example: {<span class="hljs-string">"answer"</span>: example[<span class="hljs-string">"answers"</span>][<span class="hljs-number">0</span>]}, remove_columns=[<span class="hljs-string">"answer"</span>, <span class="hljs-string">"answers"</span>]
<span class="hljs-meta">... </span>)</pre></div> <p>Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with <code>max_position_embeddings = 512</code> (you can
find this information in the <a href="https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18" rel="nofollow">checkpoint’s <code>config.json</code> file</a>).
We could truncate the examples, but to avoid a situation where the answer might be at the end of a large document and end up truncated,
here we’ll remove the few examples whose tokenized sequence is likely to end up longer than 512 tokens.
If most of the documents in your dataset are long, you can implement a sliding window strategy - check out <a href="https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb" rel="nofollow">this notebook</a> for details.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>updated_dataset = updated_dataset.<span class="hljs-built_in">filter</span>(<span class="hljs-keyword">lambda</span> x: <span class="hljs-built_in">len</span>(x[<span class="hljs-string">"words"</span>]) + <span class="hljs-built_in">len</span>(x[<span class="hljs-string">"question"</span>].split()) < <span class="hljs-number">512</span>)</pre></div> <p>At this point let’s also remove the OCR features from this dataset. These are a result of OCR for fine-tuning a different
model. They would still require some processing if we wanted to use them, as they do not match the input requirements
of the model we use in this guide. Instead, we can use the <a href="/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor">LayoutLMv2Processor</a> on the original data for both OCR and
tokenization. This way we’ll get inputs that match the model’s expected input format. If you want to process images manually,
check out the <a href="../model_doc/layoutlmv2"><code>LayoutLMv2</code> model documentation</a> to learn what input format the model expects.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>updated_dataset = updated_dataset.remove_columns(<span class="hljs-string">"words"</span>)
<span class="hljs-meta">>>> </span>updated_dataset = updated_dataset.remove_columns(<span class="hljs-string">"bounding_boxes"</span>)</pre></div> <p>Finally, the data exploration won’t be complete if we don’t peek at an image example.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>updated_dataset[<span class="hljs-string">"train"</span>][<span class="hljs-number">11</span>][<span class="hljs-string">"image"</span>]</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"></div> <h2 class="relative group"><a id="preprocess-the-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess-the-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocess the data</span></h2> <p>The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality
are preprocessed according to the model’s expectations. Let’s start by loading the <a href="/docs/transformers/v4.30.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor">LayoutLMv2Processor</a>, which internally combines an image processor that can handle image data and a tokenizer that can encode text data.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor
<span class="hljs-meta">>>> </span>processor = AutoProcessor.from_pretrained(model_checkpoint)</pre></div> <h3 class="relative group"><a id="preprocessing-document-images" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocessing-document-images"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocessing document images</span></h3> <p>First, let’s prepare the document images for the model with the help of the <code>image_processor</code> from the processor.
By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels,
and applies OCR with Tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need.
Write a function that applies the default image processing to a batch of images and returns the results of OCR.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>image_processor = processor.image_processor
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">get_ocr_words_and_boxes</span>(<span class="hljs-params">examples</span>):
<span class="hljs-meta">... </span> images = [image.convert(<span class="hljs-string">"RGB"</span>) <span class="hljs-keyword">for</span> image <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"image"</span>]]
<span class="hljs-meta">... </span> encoded_inputs = image_processor(images)
<span class="hljs-meta">... </span> examples[<span class="hljs-string">"image"</span>] = encoded_inputs.pixel_values
<span class="hljs-meta">... </span> examples[<span class="hljs-string">"words"</span>] = encoded_inputs.words
<span class="hljs-meta">... </span> examples[<span class="hljs-string">"boxes"</span>] = encoded_inputs.boxes
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> examples</pre></div> <p>To apply this preprocessing to the entire dataset in a fast way, use <a href="https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Dataset.map" rel="nofollow">map</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>dataset_with_ocr = updated_dataset.<span class="hljs-built_in">map</span>(get_ocr_words_and_boxes, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>)</pre></div> <h3 class="relative group"><a id="preprocessing-text-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocessing-text-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocessing text data</span></h3> <p>Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model.
This involves converting the words and boxes that we got in the previous step to token-level <code>input_ids</code>, <code>attention_mask</code>,
<code>token_type_ids</code> and <code>bbox</code>. For preprocessing text, we’ll need the <code>tokenizer</code> from the processor.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>tokenizer = processor.tokenizer</pre></div> <p>On top of the preprocessing mentioned above, we also need to add the labels for the model. For <code>xxxForQuestionAnswering</code> models
in 🤗 Transformers, the labels consist of the <code>start_positions</code> and <code>end_positions</code>, indicating which token is at the
start and which token is at the end of the answer.</p> <p>Let’s start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list).</p> <p>This function will take two lists as input, <code>words_list</code> and <code>answer_list</code>. It will then iterate over the <code>words_list</code> and check
if the current word in the <code>words_list</code> (words_list[i]) is equal to the first word of answer_list (answer_list[0]) and if
the sublist of <code>words_list</code> starting from the current word and of the same length as <code>answer_list</code> is equal to <code>answer_list</code>.
If this condition is true, it means that a match has been found, and the function will record the match, its starting index (idx),
and its ending index (idx + len(answer_list) - 1). If more than one match is found, the function returns only the first one.
If no match is found, the function returns <code>None</code>, 0, and 0.</p> <div class="code-block relative"><pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">subfinder</span>(<span class="hljs-params">words_list, answer_list</span>):
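<span class="hljs-meta">... </span>    <span class="hljs-comment"># collect every occurrence of the answer word sequence in the document words</span>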
<span class="hljs-meta">... </span> matches = []
<span class="hljs-meta">... </span> start_indices = []
<span class="hljs-meta">... </span> end_indices = []
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> idx, i <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(<span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(words_list))):
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> words_list[i] == answer_list[<span class="hljs-number">0</span>] <span class="hljs-keyword">and</span> words_list[i : i + <span class="hljs-built_in">len</span>(answer_list)] == answer_list:
<span class="hljs-meta">... </span> matches.append(answer_list)
<span class="hljs-meta">... </span> start_indices.append(idx)
<span class="hljs-meta">... </span> end_indices.append(idx + <span class="hljs-built_in">len</span>(answer_list) - <span class="hljs-number">1</span>)
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> matches:
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> matches[<span class="hljs-number">0</span>], start_indices[<span class="hljs-number">0</span>], end_indices[<span class="hljs-number">0</span>]
<span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>:
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span></pre></div> <p>To illustrate how this function finds the position of the answer, let’s use it on an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>example = dataset_with_ocr[<span class="hljs-string">"train"</span>][<span class="hljs-number">1</span>]
<span class="hljs-meta">>>> </span>words = [word.lower() <span class="hljs-keyword">for</span> word <span class="hljs-keyword">in</span> example[<span class="hljs-string">"words"</span>]]
<span class="hljs-meta">>>> </span>match, word_idx_start, word_idx_end = subfinder(words, example[<span class="hljs-string">"answer"</span>].lower().split())
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Question: "</span>, example[<span class="hljs-string">"question"</span>])
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Words:"</span>, words)
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Answer: "</span>, example[<span class="hljs-string">"answer"</span>])
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"start_index"</span>, word_idx_start)
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"end_index"</span>, word_idx_end)
Question: Who <span class="hljs-keyword">is</span> <span class="hljs-keyword">in</span> cc <span class="hljs-keyword">in</span> this letter?
Words: [<span class="hljs-string">'wie'</span>, <span class="hljs-string">'baw'</span>, <span class="hljs-string">'brown'</span>, <span class="hljs-string">'&'</span>, <span class="hljs-string">'williamson'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'corporation'</span>, <span class="hljs-string">'research'</span>, <span class="hljs-string">'&'</span>, <span class="hljs-string">'development'</span>, <span class="hljs-string">'internal'</span>, <span class="hljs-string">'correspondence'</span>, <span class="hljs-string">'to:'</span>, <span class="hljs-string">'r.'</span>, <span class="hljs-string">'h.'</span>, <span class="hljs-string">'honeycutt'</span>, <span class="hljs-string">'ce:'</span>, <span class="hljs-string">'t.f.'</span>, <span class="hljs-string">'riehl'</span>, <span class="hljs-string">'from:'</span>, <span class="hljs-string">'.'</span>, <span class="hljs-string">'c.j.'</span>, <span class="hljs-string">'cook'</span>, <span class="hljs-string">'date:'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'8,'</span>, <span class="hljs-string">'1995'</span>, <span class="hljs-string">'subject:'</span>, <span class="hljs-string">'review'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'existing'</span>, <span class="hljs-string">'brainstorming'</span>, <span class="hljs-string">'ideas/483'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'major'</span>, <span class="hljs-string">'function'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'product'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'graup'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'develop'</span>, <span class="hljs-string">'marketable'</span>, <span class="hljs-string">'nove!'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'would'</span>, <span class="hljs-string">'be'</span>, <span class="hljs-string">'profitable'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'manufacture'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'sell.'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'defined'</span>, <span class="hljs-string">'as:'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'kind,'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'from'</span>, <span class="hljs-string">'anything'</span>, <span class="hljs-string">'seen'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'known'</span>, <span class="hljs-string">'before.'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'defined'</span>, <span class="hljs-string">'as:'</span>, <span class="hljs-string">'something'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'introduced;'</span>, <span class="hljs-string">'act'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'innovating;'</span>, <span 
class="hljs-string">'introduction'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'things'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'methods.'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'incorporate'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'latest'</span>, <span class="hljs-string">'technologies,'</span>, <span class="hljs-string">'materials'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'know-how'</span>, <span class="hljs-string">'available'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'give'</span>, <span class="hljs-string">'then'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'unique'</span>, <span class="hljs-string">'taste'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'look.'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'first'</span>, <span class="hljs-string">'task'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'product'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'group'</span>, <span class="hljs-string">'was'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'assemble,'</span>, <span class="hljs-string">'review'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'categorize'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'list'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'existing'</span>, <span class="hljs-string">'brainstorming'</span>, <span class="hljs-string">'ideas.'</span>, <span class="hljs-string">'ideas'</span>, <span class="hljs-string">'were'</span>, <span class="hljs-string">'grouped'</span>, <span class="hljs-string">'into'</span>, <span class="hljs-string">'two'</span>, <span class="hljs-string">'major'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'labeled'</span>, <span class="hljs-string">'appearance'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'taste/aroma.'</span>, <span class="hljs-string">'these'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'are'</span>, <span class="hljs-string">'used'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'differ'</span>, <span class="hljs-string">'from'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'visual'</span>, <span class="hljs-string">'and/or'</span>, <span class="hljs-string">'taste/aroma'</span>, <span class="hljs-string">'point'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'view'</span>, <span class="hljs-string">'compared'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'canventional'</span>, <span class="hljs-string">'cigarettes.'</span>, <span class="hljs-string">'other'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'include'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'combination'</span>, <span 
class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'above,'</span>, <span class="hljs-string">'filters,'</span>, <span class="hljs-string">'packaging'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'brand'</span>, <span class="hljs-string">'extensions.'</span>, <span class="hljs-string">'appearance'</span>, <span class="hljs-string">'this'</span>, <span class="hljs-string">'category'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'used'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'constructions'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'yield'</span>, <span class="hljs-string">'visually'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'with'</span>, <span class="hljs-string">'minimal'</span>, <span class="hljs-string">'changes'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'smoke'</span>, <span class="hljs-string">'chemistry'</span>, <span class="hljs-string">'two'</span>, <span class="hljs-string">'cigarettes'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'cne.'</span>, <span class="hljs-string">'emulti-plug'</span>, <span class="hljs-string">'te'</span>, <span class="hljs-string">'build'</span>, <span class="hljs-string">'yaur'</span>, <span class="hljs-string">'awn'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'eswitchable'</span>, <span class="hljs-string">'menthol'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'non'</span>, <span class="hljs-string">'menthol'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'*cigarettes'</span>, <span class="hljs-string">'with'</span>, <span class="hljs-string">'interspaced'</span>, <span class="hljs-string">'perforations'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'enable'</span>, <span class="hljs-string">'smoker'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'separate'</span>, <span class="hljs-string">'unburned'</span>, <span class="hljs-string">'section'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'future'</span>, <span class="hljs-string">'smoking.'</span>, <span class="hljs-string">'«short'</span>, <span class="hljs-string">'cigarette,'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'section'</span>, <span class="hljs-string">'30'</span>, <span class="hljs-string">'mm.'</span>, <span class="hljs-string">'«extremely'</span>, <span class="hljs-string">'fast'</span>, <span class="hljs-string">'buming'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'«novel'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'constructions'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'permit'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'significant'</span>, <span class="hljs-string">'reduction'</span>, <span class="hljs-string">'iretobacco'</span>, <span class="hljs-string">'weight'</span>, <span class="hljs-string">'while'</span>, <span class="hljs-string">'maintaining'</span>, <span class="hljs-string">'smoking'</span>, <span 
class="hljs-string">'mechanics'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'visual'</span>, <span class="hljs-string">'characteristics.'</span>, <span class="hljs-string">'higher'</span>, <span class="hljs-string">'basis'</span>, <span class="hljs-string">'weight'</span>, <span class="hljs-string">'paper:'</span>, <span class="hljs-string">'potential'</span>, <span class="hljs-string">'reduction'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'weight.'</span>, <span class="hljs-string">'«more'</span>, <span class="hljs-string">'rigid'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'column;'</span>, <span class="hljs-string">'stiffing'</span>, <span class="hljs-string">'agent'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'tobacco;'</span>, <span class="hljs-string">'e.g.'</span>, <span class="hljs-string">'starch'</span>, <span class="hljs-string">'*colored'</span>, <span class="hljs-string">'tow'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'papers;'</span>, <span class="hljs-string">'seasonal'</span>, <span class="hljs-string">'promotions,'</span>, <span class="hljs-string">'e.g.'</span>, <span class="hljs-string">'pastel'</span>, <span class="hljs-string">'colored'</span>, <span class="hljs-string">'cigarettes'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'easter'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'an'</span>, <span class="hljs-string">'ebony'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'ivory'</span>, <span class="hljs-string">'brand'</span>, <span class="hljs-string">'containing'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'mixture'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'all'</span>, <span class="hljs-string">'black'</span>, <span class="hljs-string">'(black'</span>, <span class="hljs-string">'paper'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'tow)'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'ail'</span>, <span class="hljs-string">'white'</span>, <span class="hljs-string">'cigarettes.'</span>, <span class="hljs-string">'499150498'</span>]
Answer: T.F. Riehl
start_index <span class="hljs-number">17</span>
end_index <span class="hljs-number">18</span></pre></div> <p>Once examples are encoded, however, they will look like this:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>encoding = tokenizer(example[<span class="hljs-string">"question"</span>], example[<span class="hljs-string">"words"</span>], example[<span class="hljs-string">"boxes"</span>])
<span class="hljs-meta">>>> </span>tokenizer.decode(encoding[<span class="hljs-string">"input_ids"</span>])
[CLS] who <span class="hljs-keyword">is</span> <span class="hljs-keyword">in</span> cc <span class="hljs-keyword">in</span> this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ...</pre></div> <p>We’ll need to find the position of the answer in the encoded input.</p> <ul><li><code>token_type_ids</code> tells us which tokens are part of the question, and which ones are part of the document’s words.</li> <li><code>tokenizer.cls_token_id</code> will help find the special token at the beginning of the input.</li> <li><code>word_ids</code> will help match the answer found in the original <code>words</code> to the same answer in the full encoded input and determine
the start/end position of the answer in the encoded input.</li></ul> <p>With that in mind, let’s create a function to encode a batch of examples in the dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">encode_dataset</span>(<span class="hljs-params">examples, max_length=<span class="hljs-number">512</span></span>):
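<span class="hljs-meta">... </span>    <span class="hljs-comment"># tokenize the question together with the document words and boxes, then label the answer span</span>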
<span class="hljs-meta">... </span> questions = examples[<span class="hljs-string">"question"</span>]
<span class="hljs-meta">... </span> words = examples[<span class="hljs-string">"words"</span>]
<span class="hljs-meta">... </span> boxes = examples[<span class="hljs-string">"boxes"</span>]
<span class="hljs-meta">... </span> answers = examples[<span class="hljs-string">"answer"</span>]
<span class="hljs-meta">... </span> <span class="hljs-comment"># encode the batch of examples and initialize the start_positions and end_positions</span>
<span class="hljs-meta">... </span> encoding = tokenizer(questions, words, boxes, max_length=max_length, padding=<span class="hljs-string">"max_length"</span>, truncation=<span class="hljs-literal">True</span>)
<span class="hljs-meta">... </span> start_positions = []
<span class="hljs-meta">... </span> end_positions = []
<span class="hljs-meta">... </span> <span class="hljs-comment"># loop through the examples in the batch</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(questions)):
<span class="hljs-meta">... </span> cls_index = encoding[<span class="hljs-string">"input_ids"</span>][i].index(tokenizer.cls_token_id)
<span class="hljs-meta">... </span> <span class="hljs-comment"># find the position of the answer in example's words</span>
<span class="hljs-meta">... </span> words_example = [word.lower() <span class="hljs-keyword">for</span> word <span class="hljs-keyword">in</span> words[i]]
<span class="hljs-meta">... </span> answer = answers[i]
<span class="hljs-meta">... </span> match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split())
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> match:
<span class="hljs-meta">... </span> <span class="hljs-comment"># if match is found, use `token_type_ids` to find where words start in the encoding</span>
<span class="hljs-meta">... </span> token_type_ids = encoding[<span class="hljs-string">"token_type_ids"</span>][i]
<span class="hljs-meta">... </span> token_start_index = <span class="hljs-number">0</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">while</span> token_type_ids[token_start_index] != <span class="hljs-number">1</span>:
<span class="hljs-meta">... </span> token_start_index += <span class="hljs-number">1</span>
<span class="hljs-meta">... </span> token_end_index = <span class="hljs-built_in">len</span>(encoding[<span class="hljs-string">"input_ids"</span>][i]) - <span class="hljs-number">1</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">while</span> token_type_ids[token_end_index] != <span class="hljs-number">1</span>:
<span class="hljs-meta">... </span> token_end_index -= <span class="hljs-number">1</span>
<span class="hljs-meta">... </span> word_ids = encoding.word_ids(i)[token_start_index : token_end_index + <span class="hljs-number">1</span>]
<span class="hljs-meta">... </span> start_position = cls_index
<span class="hljs-meta">... </span> end_position = cls_index
<span class="hljs-meta">... </span> <span class="hljs-comment"># loop over word_ids and increase `token_start_index` until it matches the answer position in words</span>
<span class="hljs-meta">... </span> <span class="hljs-comment"># once it matches, save the `token_start_index` as the `start_position` of the answer in the encoding</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> <span class="hljs-built_in">id</span> <span class="hljs-keyword">in</span> word_ids:
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-built_in">id</span> == word_idx_start:
<span class="hljs-meta">... </span> start_position = token_start_index
<span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>:
<span class="hljs-meta">... </span> token_start_index += <span class="hljs-number">1</span>
<span class="hljs-meta">... </span> <span class="hljs-comment"># similarly loop over `word_ids` starting from the end to find the `end_position` of the answer</span>
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> <span class="hljs-built_in">id</span> <span class="hljs-keyword">in</span> word_ids[::-<span class="hljs-number">1</span>]:
<span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-built_in">id</span> == word_idx_end:
<span class="hljs-meta">... </span> end_position = token_end_index
<span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>:
<span class="hljs-meta">... </span> token_end_index -= <span class="hljs-number">1</span>
<span class="hljs-meta">... </span> start_positions.append(start_position)
<span class="hljs-meta">... </span> end_positions.append(end_position)
<span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>:
<span class="hljs-meta">... </span> start_positions.append(cls_index)
<span class="hljs-meta">... </span> end_positions.append(cls_index)
<span class="hljs-meta">... </span> encoding[<span class="hljs-string">"image"</span>] = examples[<span class="hljs-string">"image"</span>]
<span class="hljs-meta">... </span> encoding[<span class="hljs-string">"start_positions"</span>] = start_positions
<span class="hljs-meta">... </span> encoding[<span class="hljs-string">"end_positions"</span>] = end_positions
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> encoding</pre></div> <p>Now that we have this preprocessing function, we can encode the entire dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>encoded_train_dataset = dataset_with_ocr[<span class="hljs-string">"train"</span>].<span class="hljs-built_in">map</span>(
<span class="hljs-meta">... </span> encode_dataset, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>, remove_columns=dataset_with_ocr[<span class="hljs-string">"train"</span>].column_names
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>encoded_test_dataset = dataset_with_ocr[<span class="hljs-string">"test"</span>].<span class="hljs-built_in">map</span>(
<span class="hljs-meta">... </span> encode_dataset, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>, remove_columns=dataset_with_ocr[<span class="hljs-string">"test"</span>].column_names
<span class="hljs-meta">... </span>)</pre></div> <p>Let’s check what the features of the encoded dataset look like:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>encoded_train_dataset.features
{<span class="hljs-string">'image'</span>: <span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'uint8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'input_ids'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int32'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'token_type_ids'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'attention_mask'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'bbox'</span>: <span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'start_positions'</span>: Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>),
<span class="hljs-string">'end_positions'</span>: Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>)}</pre></div> <h2 class="relative group"><a id="evaluation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Evaluation</span></h2> <p>Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much
of your time, this guide skips the evaluation step. The <a href="/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> still calculates the evaluation loss during training so
you’re not completely in the dark about your model’s performance. Extractive question answering is typically evaluated using F1/exact match.
If you’d like to implement it yourself, check out the <a href="https://huggingface.co/course/chapter7/7?fw=pt#postprocessing" rel="nofollow">Question Answering chapter</a>
of the Hugging Face course for inspiration.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Train</span></h2> <p>Congratulations! You’ve successfully navigated the toughest part of this guide and now you are ready to train your own model.
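If you only need a rough signal, a simple normalized exact-match helper is enough to compare a decoded prediction against the reference answers. This is a minimal sketch, not code from the original guide:
```
>>> def normalize(text):
...     return " ".join(text.lower().split())
>>> def exact_match(prediction, references):
...     return normalize(prediction) in [normalize(r) for r in references]
>>> exact_match("Lee A. Waller", ["lee a. waller", "TRRF Vice President"])
True```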
## [](#train)Train
Congratulations! You've successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps:
- Load the model with [AutoModelForDocumentQuestionAnswering](/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModelForDocumentQuestionAnswering) using the same checkpoint as in the preprocessing.
- Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments).
- Define a function to batch examples together; here the [DefaultDataCollator](/docs/transformers/v4.30.0/en/main_classes/data_collator#transformers.DefaultDataCollator) will do just fine.
- Pass the training arguments to [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, and data collator.
- Call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)```
In the [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments) use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed.
```
>>> from transformers import TrainingArguments
>>> # REPLACE THIS WITH YOUR REPO ID
>>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa"
>>> training_args = TrainingArguments(
...     output_dir=repo_id,
...     per_device_train_batch_size=4,
...     num_train_epochs=20,
...     save_steps=200,
...     logging_steps=50,
...     evaluation_strategy="steps",
...     learning_rate=5e-5,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )```
Define a simple data collator to batch examples together.
```
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()```
Finally, bring everything together, and call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train):
```
>>> from transformers import Trainer
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=encoded_train_dataset,
...     eval_dataset=encoded_test_dataset,
...     tokenizer=processor,
... )
>>> trainer.train()```
To add the final model to 🤗 Hub, create a model card and call `push_to_hub`:
```
>>> trainer.create_model_card()
>>> trainer.push_to_hub()```
## [](#inference)Inference
Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.Pipeline).
Let's take an example:
```
>>> example = dataset["test"][2]
>>> question = example["query"]["en"]
>>> image = example["image"]
>>> print(question)
>>> print(example["answers"])
'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']```
Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it.
```
>>> from transformers import pipeline
>>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> qa_pipeline(image, question)
[{'score': 0.9949808120727539,
  'answer': 'Lee A. Waller',
  'start': 55,
  'end': 57}]```
You can also manually replicate the results of the pipeline if you'd like:
1. Take an image and a question, prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape (batch_size, sequence_length).
4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`.
5. Decode the answer with the tokenizer.
```
>>> import torch
>>> from transformers import AutoProcessor
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> with torch.no_grad():
...     encoding = processor(image.convert("RGB"), question, return_tensors="pt")
...     outputs = model(**encoding)
...     start_logits = outputs.start_logits
...     end_logits = outputs.end_logits
...     predicted_start_idx = start_logits.argmax(-1).item()
...     predicted_end_idx = end_logits.argmax(-1).item()
>>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
'lee a. waller'```
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/image_captioning" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Image captioning</a>
<a href="/docs/transformers/tasks/text-to-speech" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Text to speech<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Document Question Answering","isExpanded":true,"id":"document-question-answering","url":"#document-question-answering","sections":[{"title":"Load the data","isExpanded":true,"id":"load-the-data","url":"#load-the-data"},{"title":"Preprocess the data","isExpanded":true,"id":"preprocess-the-data","url":"#preprocess-the-data","sections":[{"title":"Preprocessing document images","isExpanded":true,"id":"preprocessing-document-images","url":"#preprocessing-document-images"},{"title":"Preprocessing text data","isExpanded":true,"id":"preprocessing-text-data","url":"#preprocessing-text-data"}]},{"title":"Evaluation","isExpanded":true,"id":"evaluation","url":"#evaluation"},{"title":"Train","isExpanded":true,"id":"train","url":"#train"},{"title":"Inference","isExpanded":true,"id":"inference","url":"#inference"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#document-question-answering" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-document-question-answering"><wbr>Document <wbr>Question <wbr>Answering</a> <a href="#load-the-data" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-the-data"><wbr>Load the data</a> <a href="#preprocess-the-data" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess-the-data"><wbr>Preprocess the data</a> <a href="#preprocessing-document-images" class="pl-8 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocessing-document-images"><wbr>Preprocessing document images</a> <a href="#preprocessing-text-data" class="pl-8 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocessing-text-data"><wbr>Preprocessing text data</a> <a href="#evaluation" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluation"><wbr>Evaluation</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/document_question_answering" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/document_question_answering");
}
</script>
<iframe name="__privateStripeMetricsController0430" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fdocument_question_answering&title=Document%20Question%20Answering&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:02.581Z |
Video classification | https://huggingface.co/docs/transformers/tasks/video_classification | Video classification is the task of assigning a label or class to an entire video. Videos are expected to have only one class for each video. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is all about. A real-world application of video classification is action / activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting.
This guide will show you how to:
1. Fine-tune [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) on a subset of the [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) dataset.
2. Use your fine-tuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
[TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae)
Before you begin, make sure you have all the necessary libraries installed:
```
pip install -q pytorchvideo transformers evaluate```
You will use [PyTorchVideo](https://pytorchvideo.org/) (dubbed `pytorchvideo`) to process and prepare the videos.
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```
>>> from huggingface_hub import notebook_login
>>> notebook_login()```
## [](#load-ucf101-dataset)Load UCF101 dataset
Start by loading a subset of the [UCF-101 dataset](https://www.crcv.ucf.edu/data/UCF101.php). This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```
>>> from huggingface_hub import hf_hub_download
>>> hf_dataset_identifier = "sayakpaul/ucf101-subset"
>>> filename = "UCF101_subset.tar.gz"
>>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")```
After the subset has been downloaded, you need to extract the compressed archive:
```
>>> import tarfile
>>> with tarfile.open(file_path) as t:
...     t.extractall(".")```
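As a quick sanity check (not part of the original guide), you can confirm the three splits were extracted; the `dataset_root_path` variable defined here is also what the dataset cells further down assume:
```
>>> import pathlib
>>> dataset_root_path = "UCF101_subset"
>>> sorted(p.name for p in pathlib.Path(dataset_root_path).iterdir())
['test', 'train', 'val']```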
At a high level, the dataset is organized like so:
```
UCF101_subset/
    train/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...
    val/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...
    test/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...```
The (`sorted`) video paths appear like so:
```
...
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'
...```
You will notice that there are video clips belonging to the same group / scene, where the group is denoted by `g` in the video file paths. For example, `v_ApplyEyeMakeup_g07_c04.avi` and `v_ApplyEyeMakeup_g07_c06.avi` come from the same group.
For the validation and evaluation splits, you wouldn’t want to have video clips from the same group / scene to prevent [data leakage](https://www.kaggle.com/code/alexisbcook/data-leakage). The subset that you are using in this tutorial takes this information into account.
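The group is easy to read off a file name; for example (a quick illustration, not code from the original guide):
```
>>> path = "UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi"
>>> path.split("_")[-2]
'g07'```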
Next up, you will derive the set of labels present in the dataset. Also, create two dictionaries that’ll be helpful when initializing the model:
- `label2id`: maps the class names to integers.
- `id2label`: maps the integers to class names.
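The next cell relies on a list `all_video_file_paths` containing every video path in the subset. If you don't already have such a list, a minimal way to build it, assuming the folder layout and `.avi` extensions shown above (this is an assumption, not code from the original guide), is:
```
>>> import pathlib
>>> all_video_file_paths = sorted(pathlib.Path("UCF101_subset").glob("*/*/*.avi"))```
With that in place, build the label mappings: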
```
>>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths})
>>> label2id = {label: i for i, label in enumerate(class_labels)}
>>> id2label = {i: label for label, i in label2id.items()}
>>> print(f"Unique classes: {list(label2id.keys())}.")
```
There are 10 unique classes. For each class, there are 30 videos in the training set.
## [](#load-a-model-to-finetune)Load a model to fine-tune
Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model’s encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset.
```
>>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
>>> model_ckpt = "MCG-NJU/videomae-base"
>>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
>>> model = VideoMAEForVideoClassification.from_pretrained(
... model_ckpt,
... label2id=label2id,
... id2label=id2label,
... ignore_mismatched_sizes=True,
... )```
While the model is loading, you might notice the following warning:
```
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.```
The warning is telling us we are throwing away some weights (e.g. the weights and bias of the `classifier` layer) and randomly initializing some others (the weights and bias of a new `classifier` layer). This is expected in this case, because we are adding a new head for which we don’t have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.
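You can also confirm that the new classification head has the right number of outputs for the 10 classes in this subset (a quick check, not part of the original guide):
```
>>> model.config.num_labels
10```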
**Note** that [this checkpoint](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) leads to better performance on this task as the checkpoint was obtained fine-tuning on a similar downstream task having considerable domain overlap. You can check out [this checkpoint](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) which was obtained by fine-tuning `MCG-NJU/videomae-base-finetuned-kinetics`.
## [](#prepare-the-datasets-for-training)Prepare the datasets for training
For preprocessing the videos, you will leverage the [PyTorchVideo library](https://pytorchvideo.org/). Start by importing the dependencies we need.
```
>>> import pytorchvideo.data
>>> from pytorchvideo.transforms import (
... ApplyTransformToKey,
... Normalize,
... RandomShortSideScale,
... RemoveKey,
... ShortSideScale,
... UniformTemporalSubsample,
... )
>>> from torchvision.transforms import (
... Compose,
... Lambda,
... RandomCrop,
... RandomHorizontalFlip,
... Resize,
... )```
For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations check out the [official documentation of PyTorchVideo](https://pytorchvideo.org/).
Use the `image_processor` associated with the pre-trained model to obtain the following information:
- Image mean and standard deviation with which the video frame pixels will be normalized.
- Spatial resolution to which the video frames will be resized.
Start by defining some constants.
```
>>> mean = image_processor.image_mean
>>> std = image_processor.image_std
>>> if "shortest_edge" in image_processor.size:
... height = width = image_processor.size["shortest_edge"]
>>> else:
... height = image_processor.size["height"]
... width = image_processor.size["width"]
>>> resize_to = (height, width)
>>> num_frames_to_sample = model.config.num_frames
>>> sample_rate = 4
>>> fps = 30
>>> clip_duration = num_frames_to_sample * sample_rate / fps```
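For reference, with the `MCG-NJU/videomae-base` checkpoint `num_frames_to_sample` is 16, so `clip_duration` works out to 16 * 4 / 30 ≈ 2.13 seconds of raw video per sampled clip.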
Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:
```
>>> train_transform = Compose(
... [
... ApplyTransformToKey(
... key="video",
... transform=Compose(
... [
... UniformTemporalSubsample(num_frames_to_sample),
... Lambda(lambda x: x / 255.0),
... Normalize(mean, std),
... RandomShortSideScale(min_size=256, max_size=320),
... RandomCrop(resize_to),
... RandomHorizontalFlip(p=0.5),
... ]
... ),
... ),
... ]
... )
>>> train_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "train"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration),
... decode_audio=False,
... transform=train_transform,
... )```
The same sequence of workflow can be applied to the validation and evaluation sets:
```
>>> val_transform = Compose(
... [
... ApplyTransformToKey(
... key="video",
... transform=Compose(
... [
... UniformTemporalSubsample(num_frames_to_sample),
... Lambda(lambda x: x / 255.0),
... Normalize(mean, std),
... Resize(resize_to),
... ]
... ),
... ),
... ]
... )
>>> val_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "val"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
... decode_audio=False,
... transform=val_transform,
... )
>>> test_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "test"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
... decode_audio=False,
... transform=val_transform,
... )```
**Note**: The above dataset pipelines are taken from the [official PyTorchVideo example](https://pytorchvideo.org/docs/tutorial_classification#dataset). We're using the [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) function because it's tailored for the UCF-101 dataset. Under the hood, it returns a [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) object. The `LabeledVideoDataset` class is the base class for all things video in the PyTorchVideo dataset. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly. Refer to the `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) to learn more. Also, if your dataset follows a similar structure (as shown above), then using `pytorchvideo.data.Ucf101()` should work just fine.
You can access the `num_videos` attribute to know the number of videos in the dataset.
```
>>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
```
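Each element these datasets yield is a dictionary. Before visualizing anything, you can peek at one sample to see what the transforms produce (a quick check, not part of the original guide):
```
>>> sample = next(iter(train_dataset))
>>> print(sorted(sample.keys()))
>>> print(sample["video"].shape)  # (num_channels, num_frames, height, width)```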
## [](#visualize-the-preprocessed-video-for-better-debugging)Visualize the preprocessed video for better debugging
```
>>> import imageio
>>> import numpy as np
>>> from IPython.display import Image
>>> def unnormalize_img(img):
...     """Un-normalizes the image pixels."""
...     img = (img * std) + mean
...     img = (img * 255).astype("uint8")
...     return img.clip(0, 255)
>>> def create_gif(video_tensor, filename="sample.gif"):
...     """Prepares a GIF from a video tensor.
...
...     The video tensor is expected to have the following shape:
...     (num_frames, num_channels, height, width).
...     """
...     frames = []
...     for video_frame in video_tensor:
...         frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy())
...         frames.append(frame_unnormalized)
...     kargs = {"duration": 0.25}
...     imageio.mimsave(filename, frames, "GIF", **kargs)
...     return filename
>>> def display_gif(video_tensor, gif_name="sample.gif"):
...     """Prepares and displays a GIF from a video tensor."""
...     video_tensor = video_tensor.permute(1, 0, 2, 3)
...     gif_filename = create_gif(video_tensor, gif_name)
...     return Image(filename=gif_filename)
>>> sample_video = next(iter(train_dataset))
>>> video_tensor = sample_video["video"]
>>> display_gif(video_tensor)```
![Person playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif)
## [](#train-the-model)Train the model
Leverage [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) from 🤗 Transformers for training the model. To instantiate a `Trainer`, you need to define the training configuration and an evaluation metric. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model. It also helps sync all the information in the model repository on 🤗 Hub.
Most of the training arguments are self-explanatory, but one that is quite important here is `remove_unused_columns=False`. This one will drop any features not used by the model’s call function. By default it’s `True` because usually it’s ideal to drop unused feature columns, making it easier to unpack inputs into the model’s call function. But, in this case, you need the unused features (‘video’ in particular) in order to create `pixel_values` (which is a mandatory key our model expects in its inputs).
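Note that the cell below also refers to a `batch_size` variable that isn't defined elsewhere in this guide; pick a value that fits in your GPU memory (the value here is just an assumption, not one prescribed by the guide):
```
>>> batch_size = 8  # an assumption; adjust to fit your hardware```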
```
>>> from transformers import TrainingArguments, Trainer
>>> model_name = model_ckpt.split("/")[-1]
>>> new_model_name = f"{model_name}-finetuned-ucf101-subset"
>>> num_epochs = 4
>>> args = TrainingArguments(
... new_model_name,
... remove_unused_columns=False,
... evaluation_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=batch_size,
... per_device_eval_batch_size=batch_size,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
... )```
The dataset returned by `pytorchvideo.data.Ucf101()` doesn’t implement the `__len__` method. As such, we must define `max_steps` when instantiating `TrainingArguments`.
Next, you need to define a function to compute the metrics from the predictions, which will use the `metric` you’ll load now. The only preprocessing you have to do is to take the argmax of our predicted logits:
```
import evaluate
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)```
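To convince yourself the metric wiring is right, you can call `compute_metrics` on a tiny fake prediction object; it only needs `.predictions` and `.label_ids` (a sketch, not part of the original guide):
```
from collections import namedtuple
EvalPred = namedtuple("EvalPred", ["predictions", "label_ids"])
fake = EvalPred(predictions=np.array([[0.1, 0.9], [0.8, 0.2]]), label_ids=np.array([1, 1]))
compute_metrics(fake)  # {'accuracy': 0.5}```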
**A note on evaluation**:
In the [VideoMAE paper](https://arxiv.org/abs/2203.12602), the authors use the following evaluation strategy. They evaluate the model on several clips from test videos and apply different crops to those clips and report the aggregate score. However, in the interest of simplicity and brevity, we don’t consider that in this tutorial.
Also, define a `collate_fn`, which will be used to batch examples together. Each batch consists of 2 keys, namely `pixel_values` and `labels`.
```
>>> import torch
>>> def collate_fn(examples):
...     # permute each clip to (num_frames, num_channels, height, width) and stack into a batch
...     pixel_values = torch.stack(
...         [example["video"].permute(1, 0, 2, 3) for example in examples]
...     )
...     labels = torch.tensor([example["label"] for example in examples])
...     return {"pixel_values": pixel_values, "labels": labels}```
Then you just pass all of this along with the datasets to `Trainer`:
```
>>> trainer = Trainer(
... model,
... args,
... train_dataset=train_dataset,
... eval_dataset=val_dataset,
... tokenizer=image_processor,
... compute_metrics=compute_metrics,
... data_collator=collate_fn,
... )```
You might wonder why you passed along the `image_processor` as a tokenizer when you preprocessed the data already. This is only to make sure the image processor configuration file (stored as JSON) will also be uploaded to the repo on the Hub.
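If you’re curious about that file, an optional way to inspect it is to save the image processor locally; this isn’t required for training, it simply writes out the same JSON configuration that gets pushed alongside the model weights (the directory name below is just a placeholder):
```
>>> image_processor.save_pretrained("local_image_processor")  # writes local_image_processor/preprocessor_config.json```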
Now fine-tune our model by calling the `train` method:
```
>>> train_results = trainer.train()```
Once training is completed, share your model on the Hub with the [push\_to\_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use it:
```
>>> trainer.push_to_hub()```
## [](#inference)Inference
Great, now that you have fine-tuned a model, you can use it for inference!
Load a video for inference:
```
>>> sample_test_video = next(iter(test_dataset))```
![Teams playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif)
The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). Instantiate a `pipeline` for video classification with your model, and pass your video to it:
```
>>> from transformers import pipeline
>>> video_cls = pipeline(model="my_awesome_video_cls_model")
>>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi")
[{'score': 0.9272987842559814, 'label': 'BasketballDunk'},
{'score': 0.017777055501937866, 'label': 'BabyCrawling'},
{'score': 0.01663011871278286, 'label': 'BalanceBeam'},
{'score': 0.009560945443809032, 'label': 'BandMarching'},
{'score': 0.0068979403004050255, 'label': 'BaseballPitch'}]```
You can also manually replicate the results of the `pipeline` if you’d like.
```
>>> def run_inference(model, video):
...     # (num_frames, num_channels, height, width)
...     permuted_sample_test_video = video.permute(1, 0, 2, 3)
...     inputs = {
...         "pixel_values": permuted_sample_test_video.unsqueeze(0),
...         "labels": torch.tensor(
...             [sample_test_video["label"]]
...         ),  # this can be skipped if you don't have labels available
...     }
...     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
...     inputs = {k: v.to(device) for k, v in inputs.items()}
...     model = model.to(device)
...     # forward pass
...     with torch.no_grad():
...         outputs = model(**inputs)
...         logits = outputs.logits
...     return logits```
Now, pass your input to the model and return the `logits`:
```
>>> logits = run_inference(trained_model, sample_test_video["video"])```
Decoding the `logits`, we get:
```
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
```
ml-4" href="/docs/transformers/tasks/image_classification">Image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/semantic_segmentation">Semantic segmentation </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/video_classification">Video classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/object_detection">Object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_object_detection">Zero-shot object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_image_classification">Zero-shot image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="video-classification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#video-classification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Video classification</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p>Video classification is the task of assigning a label or class to an entire video. Videos are expected to have only one class for each video. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is all about. A real-world application of video classification is action / activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting.</p> <p>This guide will show you how to:</p> <ol><li>Fine-tune <a href="https://huggingface.co/docs/transformers/main/en/model_doc/videomae" rel="nofollow">VideoMAE</a> on a subset of the <a href="https://www.crcv.ucf.edu/data/UCF101.php" rel="nofollow">UCF101</a> dataset.</li> <li>Use your fine-tuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures:
<p><a href="../model_doc/timesformer">TimeSformer</a>, <a href="../model_doc/videomae">VideoMAE</a></p></div> <p>Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>pip install -q pytorchvideo transformers evaluate</pre></div> <p>You will use <a href="https://pytorchvideo.org/" rel="nofollow">PyTorchVideo</a> (dubbed <code>pytorchvideo</code>) to process and prepare the videos.</p> <p>We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login
<span class="hljs-meta">>>> </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-ucf101-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-ucf101-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load UCF101 dataset</span></h2> <p>Start by loading a subset of the <a href="https://www.crcv.ucf.edu/data/UCF101.php" rel="nofollow">UCF-101 dataset</a>. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> hf_hub_download
<span class="hljs-meta">>>> </span>hf_dataset_identifier = <span class="hljs-string">"sayakpaul/ucf101-subset"</span>
<span class="hljs-meta">>>> </span>filename = <span class="hljs-string">"UCF101_subset.tar.gz"</span>
<span class="hljs-meta">>>> </span>file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type=<span class="hljs-string">"dataset"</span>)</pre></div> <p>After the subset has been downloaded, you need to extract the compressed archive:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> tarfile
<span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> tarfile.<span class="hljs-built_in">open</span>(file_path) <span class="hljs-keyword">as</span> t:
<span class="hljs-meta">... </span> t.extractall(<span class="hljs-string">"."</span>)</pre></div> <p>At a high level, the dataset is organized like so:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>UCF101_subset/
train/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
...
val/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
...
<span class="hljs-built_in">test</span>/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
        ...
```

The (`sorted`) video paths appear like so:

```
...
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi'</span>,
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi'</span>,
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi'</span>,
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi'</span>,
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'</span>
You will notice that there are video clips belonging to the same group / scene, where the group is denoted by `g` in the video file paths: `v_ApplyEyeMakeup_g07_c04.avi` and `v_ApplyEyeMakeup_g07_c06.avi`, for example.

For the validation and evaluation splits, you wouldn't want to have video clips from the same group / scene, to prevent [data leakage](https://www.kaggle.com/code/alexisbcook/data-leakage). The subset that you are using in this tutorial takes this information into account.
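As a quick sanity check, you could verify that no group appears in more than one split. This is only an illustrative sketch; it assumes the `all_video_file_paths` collected above and the `v_<Class>_g<group>_c<clip>.avi` naming scheme:

```py
from collections import defaultdict

groups_per_split = defaultdict(set)
for path in all_video_file_paths:
    # Path layout: UCF101_subset/<split>/<class>/v_<Class>_g<group>_c<clip>.avi
    split = path.parts[1]
    group = path.name.rsplit("_", 1)[0]  # e.g. "v_ApplyEyeMakeup_g07"
    groups_per_split[split].add(group)

# No group should be shared between the train, val, and test splits.
assert not (groups_per_split["train"] & groups_per_split["val"])
assert not (groups_per_split["train"] & groups_per_split["test"])
assert not (groups_per_split["val"] & groups_per_split["test"])
```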
<span class="hljs-meta">>>> </span>label2id = {label: i <span class="hljs-keyword">for</span> i, label <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(class_labels)}
<span class="hljs-meta">>>> </span>id2label = {i: label <span class="hljs-keyword">for</span> label, i <span class="hljs-keyword">in</span> label2id.items()}
<span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(<span class="hljs-string">f"Unique classes: <span class="hljs-subst">{<span class="hljs-built_in">list</span>(label2id.keys())}</span>."</span>)
<span class="hljs-comment"># Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress'].</span></pre></div> <p>There are 10 unique classes. For each class, there are 30 videos in the training set.</p> <h2 class="relative group"><a id="load-a-model-to-finetune" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-a-model-to-finetune"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load a model to fine-tune</span></h2> <p>Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model’s encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VideoMAEImageProcessor, VideoMAEForVideoClassification
<span class="hljs-meta">>>> </span>model_ckpt = <span class="hljs-string">"MCG-NJU/videomae-base"</span>
<span class="hljs-meta">>>> </span>image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
<span class="hljs-meta">>>> </span>model = VideoMAEForVideoClassification.from_pretrained(
<span class="hljs-meta">... </span> model_ckpt,
<span class="hljs-meta">... </span> label2id=label2id,
<span class="hljs-meta">... </span> id2label=id2label,
<span class="hljs-meta">... </span> ignore_mismatched_sizes=<span class="hljs-literal">True</span>, <span class="hljs-comment"># provide this in case you're planning to fine-tune an already fine-tuned checkpoint</span>
<span class="hljs-meta">... </span>)</pre></div> <p>While the model is loading, you might notice the following warning:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., <span class="hljs-string">'decoder.decoder_layers.1.attention.output.dense.bias'</span>, <span class="hljs-string">'decoder.decoder_layers.2.attention.attention.key.weight'</span>]
- This IS expected <span class="hljs-keyword">if</span> you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected <span class="hljs-keyword">if</span> you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: [<span class="hljs-string">'classifier.bias'</span>, <span class="hljs-string">'classifier.weight'</span>]
You should probably TRAIN this model on a down-stream task to be able to use it <span class="hljs-keyword">for</span> predictions and inference.</pre></div> <p>The warning is telling us we are throwing away some weights (e.g. the weights and bias of the <code>classifier</code> layer) and randomly initializing some others (the weights and bias of a new <code>classifier</code> layer). This is expected in this case, because we are adding a new head for which we don’t have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.</p> <p><strong>Note</strong> that <a href="https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics" rel="nofollow">this checkpoint</a> leads to better performance on this task as the checkpoint was obtained fine-tuning on a similar downstream task having considerable domain overlap. You can check out <a href="https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset" rel="nofollow">this checkpoint</a> which was obtained by fine-tuning <code>MCG-NJU/videomae-base-finetuned-kinetics</code>.</p> <h2 class="relative group"><a id="prepare-the-datasets-for-training" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#prepare-the-datasets-for-training"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Prepare the datasets for training</span></h2> <p>For preprocessing the videos, you will leverage the <a href="https://pytorchvideo.org/" rel="nofollow">PyTorchVideo library</a>. 
Start by importing the dependencies we need.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> pytorchvideo.data
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> pytorchvideo.transforms <span class="hljs-keyword">import</span> (
<span class="hljs-meta">... </span> ApplyTransformToKey,
<span class="hljs-meta">... </span> Normalize,
<span class="hljs-meta">... </span> RandomShortSideScale,
<span class="hljs-meta">... </span> RemoveKey,
<span class="hljs-meta">... </span> ShortSideScale,
<span class="hljs-meta">... </span> UniformTemporalSubsample,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> torchvision.transforms <span class="hljs-keyword">import</span> (
<span class="hljs-meta">... </span> Compose,
<span class="hljs-meta">... </span> Lambda,
<span class="hljs-meta">... </span> RandomCrop,
<span class="hljs-meta">... </span> RandomHorizontalFlip,
<span class="hljs-meta">... </span> Resize,
<span class="hljs-meta">... </span>)</pre></div> <p>For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations check out the <a href="https://pytorchvideo.org" rel="nofollow">official documentation of PyTorchVideo</a>.</p> <p>Use the <code>image_processor</code> associated with the pre-trained model to obtain the following information:</p> <ul><li>Image mean and standard deviation with which the video frame pixels will be normalized.</li> <li>Spatial resolution to which the video frames will be resized.</li></ul> <p>Start by defining some constants.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>mean = image_processor.image_mean
<span class="hljs-meta">>>> </span>std = image_processor.image_std
<span class="hljs-meta">>>> </span><span class="hljs-keyword">if</span> <span class="hljs-string">"shortest_edge"</span> <span class="hljs-keyword">in</span> image_processor.size:
<span class="hljs-meta">... </span> height = width = image_processor.size[<span class="hljs-string">"shortest_edge"</span>]
<span class="hljs-meta">>>> </span><span class="hljs-keyword">else</span>:
<span class="hljs-meta">... </span> height = image_processor.size[<span class="hljs-string">"height"</span>]
<span class="hljs-meta">... </span> width = image_processor.size[<span class="hljs-string">"width"</span>]
<span class="hljs-meta">>>> </span>resize_to = (height, width)
<span class="hljs-meta">>>> </span>num_frames_to_sample = model.config.num_frames
<span class="hljs-meta">>>> </span>sample_rate = <span class="hljs-number">4</span>
<span class="hljs-meta">>>> </span>fps = <span class="hljs-number">30</span>
<span class="hljs-meta">>>> </span>clip_duration = num_frames_to_sample * sample_rate / fps</pre></div> <p>Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>train_transform = Compose(
<span class="hljs-meta">... </span> [
<span class="hljs-meta">... </span> ApplyTransformToKey(
<span class="hljs-meta">... </span> key=<span class="hljs-string">"video"</span>,
<span class="hljs-meta">... </span> transform=Compose(
<span class="hljs-meta">... </span> [
<span class="hljs-meta">... </span> UniformTemporalSubsample(num_frames_to_sample),
<span class="hljs-meta">... </span> Lambda(<span class="hljs-keyword">lambda</span> x: x / <span class="hljs-number">255.0</span>),
<span class="hljs-meta">... </span> Normalize(mean, std),
<span class="hljs-meta">... </span> RandomShortSideScale(min_size=<span class="hljs-number">256</span>, max_size=<span class="hljs-number">320</span>),
<span class="hljs-meta">... </span> RandomCrop(resize_to),
<span class="hljs-meta">... </span> RandomHorizontalFlip(p=<span class="hljs-number">0.5</span>),
<span class="hljs-meta">... </span> ]
<span class="hljs-meta">... </span> ),
<span class="hljs-meta">... </span> ),
<span class="hljs-meta">... </span> ]
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>train_dataset = pytorchvideo.data.Ucf101(
<span class="hljs-meta">... </span> data_path=os.path.join(dataset_root_path, <span class="hljs-string">"train"</span>),
<span class="hljs-meta">... </span> clip_sampler=pytorchvideo.data.make_clip_sampler(<span class="hljs-string">"random"</span>, clip_duration),
<span class="hljs-meta">... </span> decode_audio=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> transform=train_transform,
<span class="hljs-meta">... </span>)</pre></div> <p>The same sequence of workflow can be applied to the validation and evaluation sets:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>val_transform = Compose(
<span class="hljs-meta">... </span> [
<span class="hljs-meta">... </span> ApplyTransformToKey(
<span class="hljs-meta">... </span> key=<span class="hljs-string">"video"</span>,
<span class="hljs-meta">... </span> transform=Compose(
<span class="hljs-meta">... </span> [
<span class="hljs-meta">... </span> UniformTemporalSubsample(num_frames_to_sample),
<span class="hljs-meta">... </span> Lambda(<span class="hljs-keyword">lambda</span> x: x / <span class="hljs-number">255.0</span>),
<span class="hljs-meta">... </span> Normalize(mean, std),
<span class="hljs-meta">... </span> Resize(resize_to),
<span class="hljs-meta">... </span> ]
<span class="hljs-meta">... </span> ),
<span class="hljs-meta">... </span> ),
<span class="hljs-meta">... </span> ]
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>val_dataset = pytorchvideo.data.Ucf101(
<span class="hljs-meta">... </span> data_path=os.path.join(dataset_root_path, <span class="hljs-string">"val"</span>),
<span class="hljs-meta">... </span> clip_sampler=pytorchvideo.data.make_clip_sampler(<span class="hljs-string">"uniform"</span>, clip_duration),
<span class="hljs-meta">... </span> decode_audio=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> transform=val_transform,
<span class="hljs-meta">... </span>)
<span class="hljs-meta">>>> </span>test_dataset = pytorchvideo.data.Ucf101(
<span class="hljs-meta">... </span> data_path=os.path.join(dataset_root_path, <span class="hljs-string">"test"</span>),
<span class="hljs-meta">... </span> clip_sampler=pytorchvideo.data.make_clip_sampler(<span class="hljs-string">"uniform"</span>, clip_duration),
<span class="hljs-meta">... </span> decode_audio=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> transform=val_transform,
<span class="hljs-meta">... </span>)</pre></div> <p><strong>Note</strong>: The above dataset pipelines are taken from the <a href="https://pytorchvideo.org/docs/tutorial_classification#dataset" rel="nofollow">official PyTorchVideo example</a>. We’re using the <a href="https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101" rel="nofollow"><code>pytorchvideo.data.Ucf101()</code></a> function because it’s tailored for the UCF-101 dataset. Under the hood, it returns a <a href="https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset" rel="nofollow"><code>pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset</code></a> object. <code>LabeledVideoDataset</code> class is the base class for all things video in the PyTorchVideo dataset. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the <code>LabeledVideoDataset</code> class accordingly. Refer to the <code>data</code> API <a href="https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html" rel="nofollow">documentation to</a> learn more. Also, if your dataset follows a similar structure (as shown above), then using the <code>pytorchvideo.data.Ucf101()</code> should work just fine.</p> <p>You can access the <code>num_videos</code> argument to know the number of videos in the dataset.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-built_in">print</span>(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
<span class="hljs-comment"># (300, 30, 75)</span></pre></div> <h2 class="relative group"><a id="visualize-the-preprocessed-video-for-better-debugging" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#visualize-the-preprocessed-video-for-better-debugging"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Visualize the preprocessed video for better debugging</span></h2> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> imageio
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> IPython.display <span class="hljs-keyword">import</span> Image
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">unnormalize_img</span>(<span class="hljs-params">img</span>):
<span class="hljs-meta">... </span> <span class="hljs-string">"""Un-normalizes the image pixels."""</span>
<span class="hljs-meta">... </span> img = (img * std) + mean
<span class="hljs-meta">... </span> img = (img * <span class="hljs-number">255</span>).astype(<span class="hljs-string">"uint8"</span>)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> img.clip(<span class="hljs-number">0</span>, <span class="hljs-number">255</span>)
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">create_gif</span>(<span class="hljs-params">video_tensor, filename=<span class="hljs-string">"sample.gif"</span></span>):
<span class="hljs-meta">... </span> <span class="hljs-string">"""Prepares a GIF from a video tensor.
<span class="hljs-meta">... </span>
<span class="hljs-meta">... </span> The video tensor is expected to have the following shape:
<span class="hljs-meta">... </span> (num_frames, num_channels, height, width).
<span class="hljs-meta">... </span> """</span>
<span class="hljs-meta">... </span> frames = []
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> video_frame <span class="hljs-keyword">in</span> video_tensor:
<span class="hljs-meta">... </span> frame_unnormalized = unnormalize_img(video_frame.permute(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">0</span>).numpy())
<span class="hljs-meta">... </span> frames.append(frame_unnormalized)
<span class="hljs-meta">... </span> kargs = {<span class="hljs-string">"duration"</span>: <span class="hljs-number">0.25</span>}
<span class="hljs-meta">... </span> imageio.mimsave(filename, frames, <span class="hljs-string">"GIF"</span>, **kargs)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> filename
<span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">display_gif</span>(<span class="hljs-params">video_tensor, gif_name=<span class="hljs-string">"sample.gif"</span></span>):
<span class="hljs-meta">... </span> <span class="hljs-string">"""Prepares and displays a GIF from a video tensor."""</span>
<span class="hljs-meta">... </span> video_tensor = video_tensor.permute(<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>)
<span class="hljs-meta">... </span> gif_filename = create_gif(video_tensor, gif_name)
<span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> Image(filename=gif_filename)
<span class="hljs-meta">>>> </span>sample_video = <span class="hljs-built_in">next</span>(<span class="hljs-built_in">iter</span>(train_dataset))
<span class="hljs-meta">>>> </span>video_tensor = sample_video[<span class="hljs-string">"video"</span>]
<span class="hljs-meta">>>> </span>display_gif(video_tensor)</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"></div> <h2 class="relative group"><a id="train-the-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train-the-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Train the model</span></h2> <p>Leverage <a href="https://huggingface.co/docs/transformers/main_classes/trainer" rel="nofollow"><code>Trainer</code></a> from 🤗 Transformers for training the model. To instantiate a <code>Trainer</code>, you need to define the training configuration and an evaluation metric. The most important is the <a href="https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments" rel="nofollow"><code>TrainingArguments</code></a>, which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model. It also helps sync all the information in the model repository on 🤗 Hub.</p> <p>Most of the training arguments are self-explanatory, but one that is quite important here is <code>remove_unused_columns=False</code>. This one will drop any features not used by the model’s call function. By default it’s <code>True</code> because usually it’s ideal to drop unused feature columns, making it easier to unpack inputs into the model’s call function. 
But, in this case, you need the unused features (‘video’ in particular) in order to create <code>pixel_values</code> (which is a mandatory key our model expects in its inputs).</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TrainingArguments, Trainer
<span class="hljs-meta">>>> </span>model_name = model_ckpt.split(<span class="hljs-string">"/"</span>)[-<span class="hljs-number">1</span>]
<span class="hljs-meta">>>> </span>new_model_name = <span class="hljs-string">f"<span class="hljs-subst">{model_name}</span>-finetuned-ucf101-subset"</span>
<span class="hljs-meta">>>> </span>num_epochs = <span class="hljs-number">4</span>
<span class="hljs-meta">>>> </span>args = TrainingArguments(
<span class="hljs-meta">... </span> new_model_name,
<span class="hljs-meta">... </span> remove_unused_columns=<span class="hljs-literal">False</span>,
<span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> save_strategy=<span class="hljs-string">"epoch"</span>,
<span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">5e-5</span>,
<span class="hljs-meta">... </span> per_device_train_batch_size=batch_size,
<span class="hljs-meta">... </span> per_device_eval_batch_size=batch_size,
<span class="hljs-meta">... </span> warmup_ratio=<span class="hljs-number">0.1</span>,
<span class="hljs-meta">... </span> logging_steps=<span class="hljs-number">10</span>,
<span class="hljs-meta">... </span> load_best_model_at_end=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> metric_for_best_model=<span class="hljs-string">"accuracy"</span>,
<span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>,
<span class="hljs-meta">... </span> max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
<span class="hljs-meta">... </span>)</pre></div> <p>The dataset returned by <code>pytorchvideo.data.Ucf101()</code> doesn’t implement the <code>__len__</code> method. As such, we must define <code>max_steps</code> when instantiating <code>TrainingArguments</code>.</p> <p>Next, you need to define a function to compute the metrics from the predictions, which will use the <code>metric</code> you’ll load now. The only preprocessing you have to do is to take the argmax of our predicted logits:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">import</span> evaluate
metric = evaluate.load(<span class="hljs-string">"accuracy"</span>)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">compute_metrics</span>(<span class="hljs-params">eval_pred</span>):
predictions = np.argmax(eval_pred.predictions, axis=<span class="hljs-number">1</span>)
<span class="hljs-keyword">return</span> metric.compute(predictions=predictions, references=eval_pred.label_ids)</pre></div> <p><strong>A note on evaluation</strong>:</p> <p>In the <a href="https://arxiv.org/abs/2203.12602" rel="nofollow">VideoMAE paper</a>, the authors use the following evaluation strategy. They evaluate the model on several clips from test videos and apply different crops to those clips and report the aggregate score. However, in the interest of simplicity and brevity, we don’t consider that in this tutorial.</p> <p>Also, define a <code>collate_fn</code>, which will be used to batch examples together. Each batch consists of 2 keys, namely <code>pixel_values</code> and <code>labels</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">collate_fn</span>(<span class="hljs-params">examples</span>):
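If you would like a rough approximation of that idea, here is a minimal sketch that averages the logits over the several clips the `"uniform"` clip sampler yields for each test video. It assumes each sample exposes `video_name` and `label` keys (as `LabeledVideoDataset` samples do), and it is only an illustration, not the paper's exact multi-clip, multi-crop protocol.

```
import torch
from collections import defaultdict

def multi_clip_accuracy(model, dataset, device="cpu"):
    """Average the logits over all clips of the same video, then take the argmax."""
    model = model.to(device).eval()
    clip_logits = defaultdict(list)
    video_labels = {}
    for sample in dataset:
        # (num_channels, num_frames, height, width) -> (num_frames, num_channels, height, width)
        pixel_values = sample["video"].permute(1, 0, 2, 3).unsqueeze(0).to(device)
        with torch.no_grad():
            logits = model(pixel_values=pixel_values).logits.squeeze(0).cpu()
        clip_logits[sample["video_name"]].append(logits)
        video_labels[sample["video_name"]] = sample["label"]
    correct = sum(
        int(torch.stack(logits).mean(dim=0).argmax(-1).item() == video_labels[name])
        for name, logits in clip_logits.items()
    )
    return correct / len(clip_logits)

# e.g. multi_clip_accuracy(model, test_dataset, device="cuda" if torch.cuda.is_available() else "cpu")
```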
Also, define a `collate_fn`, which will be used to batch examples together. Each batch consists of 2 keys, namely `pixel_values` and `labels`.

```
def collate_fn(examples):
    # permute to (num_frames, num_channels, height, width)
    pixel_values = torch.stack(
        [example["video"].permute(1, 0, 2, 3) for example in examples]
    )
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
```

Then you just pass all of this along with the datasets to `Trainer`:

```
trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
    data_collator=collate_fn,
)
```

You might wonder why you passed along the `image_processor` as a tokenizer when you preprocessed the data already. This is only to make sure the image processor configuration file (stored as JSON) will also be uploaded to the repo on the Hub.

Now fine-tune the model by calling the `train` method:

```
train_results = trainer.train()
```

Once training is completed, share your model to the Hub with the [push_to_hub()](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
trainer.push_to_hub()
```

## [](#inference)Inference

Great, now that you have fine-tuned a model, you can use it for inference!

Load a video for inference:

```
sample_test_video = next(iter(test_dataset))
```

![Teams playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif)

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). Instantiate a `pipeline` for video classification with your model, and pass your video to it:

```
from transformers import pipeline

video_cls = pipeline(model="my_awesome_video_cls_model")
video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi")
[{'score': 0.9272987842559814, 'label': 'BasketballDunk'},
 {'score': 0.017777055501937866, 'label': 'BabyCrawling'},
 {'score': 0.01663011871278286, 'label': 'BalanceBeam'},
 {'score': 0.009560945443809032, 'label': 'BandMarching'},
 {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}]
```

You can also manually replicate the results of the `pipeline` if you’d like.

```
def run_inference(model, video):
    # (num_frames, num_channels, height, width)
    permuted_sample_test_video = video.permute(1, 0, 2, 3)
    inputs = {
        "pixel_values": permuted_sample_test_video.unsqueeze(0),
        "labels": torch.tensor(
            [sample_test_video["label"]]
        ),  # this can be skipped if you don't have labels available.
    }

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = {k: v.to(device) for k, v in inputs.items()}
    model = model.to(device)

    # forward pass
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    return logits
```

Now, pass your input to the model and return the `logits`:

```
logits = run_inference(trained_model, sample_test_video["video"])
```

Decoding the `logits`, we get:

```
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: BasketballDunk
```
| 2023-06-27T19:55:02.891Z |
Image captioning | https://huggingface.co/docs/transformers/tasks/image_captioning | Image captioning is the task of predicting a caption for a given image. Common real-world applications include aiding visually impaired people by helping them navigate different situations. Image captioning therefore helps to improve content accessibility by describing images to people.
This guide will show you how to:
- Fine-tune an image captioning model.
- Use the fine-tuned model for inference.
Before you begin, make sure you have all the necessary libraries installed:
```
pip install transformers datasets evaluate -q
pip install jiwer -q```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```
from huggingface_hub import notebook_login
notebook_login()```
## [](#load-the-pokmon-blip-captions-dataset)Load the Pokémon BLIP captions dataset
Use the 🤗 Datasets library to load a dataset that consists of {image-caption} pairs. To create your own image captioning dataset in PyTorch, you can follow [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb).
```
from datasets import load_dataset
ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds```
```
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 833
})
})```
The dataset has two features, `image` and `text`.
Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training.
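This dataset has a single caption per image, but as a rough sketch of that sampling strategy, a per-batch function could look like the one below (the `captions` list column is hypothetical and is not part of the Pokémon BLIP dataset):
```
import random

def pick_random_caption(example_batch):
    # Hypothetical: assumes a `captions` column holding a list of strings per image.
    # Sampling inside an on-the-fly transform (e.g. via `set_transform`) means a new
    # caption is drawn every time an example is loaded, i.e. on every epoch.
    example_batch["text"] = [random.choice(caps) for caps in example_batch["captions"]]
    return example_batch```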
Split the dataset’s train split into a train and test set with the `Dataset.train_test_split()` method:
```
ds = ds["train"].train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]```
Let’s visualize a couple of samples from the training set.
```
from textwrap import wrap
import matplotlib.pyplot as plt
import numpy as np
def plot_images(images, captions):
plt.figure(figsize=(20, 20))
for i in range(len(images)):
ax = plt.subplot(1, len(images), i + 1)
caption = captions[i]
caption = "\n".join(wrap(caption, 12))
plt.title(caption)
plt.imshow(images[i])
plt.axis("off")
sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)]
sample_captions = [train_ds[i]["text"] for i in range(5)]
plot_images(sample_images_to_visualize, sample_captions)```
![Sample training images](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png)
## [](#preprocess-the-dataset)Preprocess the dataset
Since the dataset has two modalities (image and text), the pre-processing pipeline will preprocess both the images and the captions.
To do so, load the processor class associated with the model you are about to fine-tune.
```
from transformers import AutoProcessor
checkpoint = "microsoft/git-base"
processor = AutoProcessor.from_pretrained(checkpoint)```
The processor will internally pre-process the image (which includes resizing and pixel scaling) and tokenize the caption.
```
def transforms(example_batch):
images = [x for x in example_batch["image"]]
captions = [x for x in example_batch["text"]]
inputs = processor(images=images, text=captions, padding="max_length")
inputs.update({"labels": inputs["input_ids"]})
return inputs
train_ds.set_transform(transforms)
test_ds.set_transform(transforms)```
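As a quick sanity check, you can peek at the keys of a transformed example; with the GIT processor these should include `pixel_values`, `input_ids`, `attention_mask`, and the `labels` added above, though the exact set of keys depends on the processor, so treat this as a rough check rather than a guarantee:
```
sample = train_ds[0]
print(sample.keys())```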
With the dataset ready, you can now set up the model for fine-tuning.
## [](#load-a-base-model)Load a base model
Load the [“microsoft/git-base”](https://huggingface.co/microsoft/git-base) checkpoint into an [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) object.
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(checkpoint)```
## [](#evaluate)Evaluate
Image captioning models are typically evaluated with the [Rouge Score](https://huggingface.co/spaces/evaluate-metric/rouge) or [Word Error Rate](https://huggingface.co/spaces/evaluate-metric/wer). For this guide, you will use the Word Error Rate (WER).
We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to [this guide](https://huggingface.co/spaces/evaluate-metric/wer).
```
from evaluate import load
import torch
wer = load("wer")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predicted = logits.argmax(-1)
decoded_labels = processor.batch_decode(labels, skip_special_tokens=True)
decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True)
wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels)
return {"wer_score": wer_score}```
## [](#train)Train!
Now, you are ready to start fine-tuning the model. You will use the 🤗 [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) for this.
First, define the training arguments using [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments).
```
from transformers import TrainingArguments, Trainer
model_name = checkpoint.split("/")[1]
training_args = TrainingArguments(
output_dir=f"{model_name}-pokemon",
learning_rate=5e-5,
num_train_epochs=50,
fp16=True,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
evaluation_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
logging_steps=50,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
load_best_model_at_end=True,
)```
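Note that with `per_device_train_batch_size=32` and `gradient_accumulation_steps=2`, each optimizer update effectively sees 32 × 2 = 64 examples per device.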
Then pass them along with the datasets and the model to 🤗 Trainer.
```
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)```
To start training, call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) on the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) object:
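```
trainer.train()```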
You should see the training loss drop smoothly as training progresses.
Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:
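```
trainer.push_to_hub()```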
## [](#inference)Inference
Take a sample image (here loaded from a URL) to test the model.
```
from PIL import Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)
image```
![Test image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png)
Prepare the image for the model.
```
device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values```
Call `generate` and decode the predictions.
```
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)```
```
a drawing of a pink and blue pokemon```
Looks like the fine-tuned model generated a pretty good caption!
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="flex flex-col"><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/image_captioning">Image captioning </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/document_question_answering">Document Question Answering </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/text-to-speech">Text to speech </a> </div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
<div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <h1 class="relative group"><a id="image-captioning" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#image-captioning"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Image captioning</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p>Image captioning is the task of predicting a caption for a given image. Common real world applications of it include
aiding visually impaired people that can help them navigate through different situations. Therefore, image captioning
helps to improve content accessibility for people by describing images to them.</p> <p>This guide will show you how to:</p> <ul><li>Fine-tune an image captioning model.</li> <li>Use the fine-tuned model for inference.</li></ul> <p>Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre>pip install transformers datasets evaluate -q
pip install jiwer -q</pre></div> <p>We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login
notebook_login()</pre></div> <h2 class="relative group"><a id="load-the-pokmon-blip-captions-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-the-pokmon-blip-captions-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load the Pokémon BLIP captions dataset</span></h2> <p>Use the 🤗 Dataset library to load a dataset that consists of {image-caption} pairs. To create your own image captioning dataset
To create your own image captioning dataset in PyTorch, you can follow [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb).

```python
from datasets import load_dataset

ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds
```

```
DatasetDict({
    train: Dataset({
        features: ['image', 'text'],
        num_rows: 833
    })
})
```

The dataset has two features, `image` and `text`.

Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training.
train_ds = ds[<span class="hljs-string">"train"</span>]
test_ds = ds[<span class="hljs-string">"test"</span>]</pre></div> <p>Let’s visualize a couple of samples from the training set.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">from</span> textwrap <span class="hljs-keyword">import</span> wrap
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">def</span> <span class="hljs-title function_">plot_images</span>(<span class="hljs-params">images, captions</span>):
plt.figure(figsize=(<span class="hljs-number">20</span>, <span class="hljs-number">20</span>))
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(images)):
ax = plt.subplot(<span class="hljs-number">1</span>, <span class="hljs-built_in">len</span>(images), i + <span class="hljs-number">1</span>)
caption = captions[i]
caption = <span class="hljs-string">"\n"</span>.join(wrap(caption, <span class="hljs-number">12</span>))
plt.title(caption)
plt.imshow(images[i])
plt.axis(<span class="hljs-string">"off"</span>)
sample_images_to_visualize = [np.array(train_ds[i][<span class="hljs-string">"image"</span>]) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">5</span>)]
sample_captions = [train_ds[i][<span class="hljs-string">"text"</span>] <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">5</span>)]
plot_images(sample_images_to_visualize, sample_captions)</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"></div> <h2 class="relative group"><a id="preprocess-the-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess-the-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Preprocess the dataset</span></h2> <p>Since the dataset has two modalities (image and text), the pre-processing pipeline will preprocess images and the captions.</p> <p>To do so, load the processor class associated with the model you are about to fine-tune.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor
checkpoint = <span class="hljs-string">"microsoft/git-base"</span>
processor = AutoProcessor.from_pretrained(checkpoint)</pre></div> <p>The processor will internally pre-process the image (which includes resizing, and pixel scaling) and tokenize the caption.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">def</span> <span class="hljs-title function_">transforms</span>(<span class="hljs-params">example_batch</span>):
images = [x <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> example_batch[<span class="hljs-string">"image"</span>]]
captions = [x <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> example_batch[<span class="hljs-string">"text"</span>]]
inputs = processor(images=images, text=captions, padding=<span class="hljs-string">"max_length"</span>)
inputs.update({<span class="hljs-string">"labels"</span>: inputs[<span class="hljs-string">"input_ids"</span>]})
<span class="hljs-keyword">return</span> inputs
train_ds.set_transform(transforms)
test_ds.set_transform(transforms)</pre></div> <p>With the dataset ready, you can now set up the model for fine-tuning.</p> <h2 class="relative group"><a id="load-a-base-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-a-base-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load a base model</span></h2> <p>Load the <a href="https://huggingface.co/microsoft/git-base" rel="nofollow">“microsoft/git-base”</a> into a <a href="https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM" rel="nofollow"><code>AutoModelForCausalLM</code></a> object.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM
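If you want to sanity-check the transform, you can index a single training example — with `set_transform`, the preprocessing runs on the fly. The exact keys and the padded caption length are assumptions about the `microsoft/git-base` processor, so treat this as a quick sketch rather than guaranteed output:

```
# On-the-fly preprocessing: indexing applies `transforms` to this one example.
sample = train_ds[0]
# Expect keys along the lines of input_ids, attention_mask, pixel_values and labels.
print(sample.keys())
print(len(sample["input_ids"]))  # padded caption length
```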
With the dataset ready, you can now set up the model for fine-tuning.

## [](#load-a-base-model)Load a base model

Load the [“microsoft/git-base”](https://huggingface.co/microsoft/git-base) checkpoint into an [AutoModelForCausalLM](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) object.

```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(checkpoint)
```

## [](#evaluate)Evaluate

Image captioning models are typically evaluated with the [Rouge Score](https://huggingface.co/spaces/evaluate-metric/rouge) or [Word Error Rate](https://huggingface.co/spaces/evaluate-metric/wer). For this guide, you will use the Word Error Rate (WER).

We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to [this guide](https://huggingface.co/spaces/evaluate-metric/wer).

```
from evaluate import load
import torch

wer = load("wer")


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predicted = logits.argmax(-1)
    decoded_labels = processor.batch_decode(labels, skip_special_tokens=True)
    decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True)
    wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels)
    return {"wer_score": wer_score}
```
## [](#train)Train!

Now, you are ready to start fine-tuning the model. You will use the 🤗 [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) for this.

First, define the training arguments using [TrainingArguments](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.TrainingArguments).

```
from transformers import TrainingArguments, Trainer

model_name = checkpoint.split("/")[1]

training_args = TrainingArguments(
    output_dir=f"{model_name}-pokemon",
    learning_rate=5e-5,
    num_train_epochs=50,
    fp16=True,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    save_total_limit=3,
    evaluation_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=50,
    logging_steps=50,
    remove_unused_columns=False,
    push_to_hub=True,
    label_names=["labels"],
    load_best_model_at_end=True,
)
```
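Note how some of these values interact: with `per_device_train_batch_size=32` and `gradient_accumulation_steps=2`, each optimizer step effectively sees 64 examples per device, and evaluation, checkpointing and logging all run on the same 50-step schedule. A quick sanity check of the arithmetic:

```
per_device_train_batch_size = 32
gradient_accumulation_steps = 2
# Effective examples per optimizer step (per device).
print(per_device_train_batch_size * gradient_accumulation_steps)  # 64
```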
Then pass them along with the datasets and the model to 🤗 Trainer.

```
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
```

To start training, simply call [train()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.train) on the [Trainer](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer) object.

```
trainer.train()
```

You should see the training loss drop smoothly as training progresses.

Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.30.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
trainer.push_to_hub()
```

## [](#inference)Inference

Take a sample image from `test_ds` to test the model.

```
from PIL import Image
import requests

url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)
image
```

![Test image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png)

Prepare image for the model.

```
device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values
```

Call `generate` and decode the predictions.

```
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```

```
a drawing of a pink and blue pokemon
```

Looks like the fine-tuned model generated a pretty good caption!
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/monocular_depth_estimation" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Depth estimation</a>
<a href="/docs/transformers/tasks/document_question_answering" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Document Question Answering<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Image captioning","isExpanded":true,"id":"image-captioning","url":"#image-captioning","sections":[{"title":"Load the Pokémon BLIP captions dataset","isExpanded":true,"id":"load-the-pokmon-blip-captions-dataset","url":"#load-the-pokmon-blip-captions-dataset"},{"title":"Preprocess the dataset","isExpanded":true,"id":"preprocess-the-dataset","url":"#preprocess-the-dataset"},{"title":"Load a base model","isExpanded":true,"id":"load-a-base-model","url":"#load-a-base-model"},{"title":"Evaluate","isExpanded":true,"id":"evaluate","url":"#evaluate"},{"title":"Train!","isExpanded":true,"id":"train","url":"#train"},{"title":"Inference","isExpanded":true,"id":"inference","url":"#inference"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#image-captioning" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-image-captioning"><wbr>Image captioning</a> <a href="#load-the-pokmon-blip-captions-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-the-pokmon-blip-captions-dataset"><wbr>Load the <wbr>Pokémon BLI<wbr>P captions dataset</a> <a href="#preprocess-the-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess-the-dataset"><wbr>Preprocess the dataset</a> <a href="#load-a-base-model" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-a-base-model"><wbr>Load a base model</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train!</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/image_captioning" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/image_captioning");
}
</script>
<iframe name="__privateStripeMetricsController7090" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fimage_captioning&title=Image%20captioning&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:03.225Z |
Zero-shot object detection | https://huggingface.co/docs/transformers/tasks/zero_shot_object_detection | Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and are limited to detecting the set of classes from the training data.
Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model, which takes a different approach. OWL-ViT is an open-vocabulary object detector, meaning it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets.
OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip), which is trained to associate images with their corresponding textual descriptions and uses a ViT to process image patches, with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using a bipartite matching loss.
With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.
In this guide, you will learn how to use OWL-ViT:
- to detect objects based on text prompts
- for batch object detection
- for image-guided object detection
Before you begin, make sure you have all the necessary libraries installed:
```
pip install -q transformers
```
## [](#zeroshot-object-detection-pipeline)Zero-shot object detection pipeline
The simplest way to try out inference with OWL-ViT is to use it in a [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit):
```
>>> from transformers import pipeline
>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
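If you have a GPU available, you can ask the pipeline to run on it. The `device=0` argument below assumes a single CUDA device and is entirely optional:

```
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection", device=0)
```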
Next, choose an image you’d like to detect objects in. Here we’ll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.
```
>>> import skimage
>>> import numpy as np
>>> from PIL import Image
>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")
>>> image
```
![Astronaut Eileen Collins](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png)
Pass the image to the pipeline along with the candidate object labels to look for. Here we pass the image directly; other suitable options include a local path to an image or an image URL. We also pass text descriptions for all items we want to query the image for.
```
>>> predictions = detector(
... image,
... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408,
'label': 'human face',
'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
{'score': 0.28099656105041504,
'label': 'nasa badge',
'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
{'score': 0.2110239565372467,
'label': 'rocket',
'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
{'score': 0.13790413737297058,
'label': 'star-spangled banner',
'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
{'score': 0.11950037628412247,
'label': 'nasa badge',
'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
{'score': 0.10649408400058746,
'label': 'rocket',
 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
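Each prediction comes with a confidence score, so if you only care about the strongest detections you can filter the list yourself before drawing; the 0.25 cutoff here is an arbitrary choice:

```
>>> confident_predictions = [p for p in predictions if p["score"] >= 0.25]
>>> [p["label"] for p in confident_predictions]
['human face', 'nasa badge']
```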
Let’s visualize the predictions:
```
>>> from PIL import ImageDraw
>>> draw = ImageDraw.Draw(image)
>>> for prediction in predictions:
... box = prediction["box"]
... label = prediction["label"]
... score = prediction["score"]
... xmin, ymin, xmax, ymax = box.values()
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")
>>> image
```
![Visualized predictions on NASA image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png)
## [](#textprompted-zeroshot-object-detection-by-hand)Text-prompted zero-shot object detection by hand
Now that you’ve seen how to use the zero-shot object detection pipeline, let’s replicate the same result manually.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit). Here we’ll use the same checkpoint as before:
```
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
Let’s take a different image to switch things up.
```
>>> import requests
>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)
>>> im
```
![Beach photo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png)
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [CLIPTokenizer](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPTokenizer) that takes care of the text inputs.
```
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
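If you are curious about what the processor produced, you can inspect the batch it returns: tokenized text queries (`input_ids` and `attention_mask`) alongside the preprocessed image (`pixel_values`). The exact tensor shapes depend on the checkpoint's text padding and image size, so check them on your end:

```
>>> print({name: tuple(tensor.shape) for name, tensor in inputs.items()})
```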
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the [post\_process\_object\_detection()](/docs/transformers/v4.30.0/en/model_doc/owlvit#transformers.OwlViTImageProcessor.post_process_object_detection) method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:
```
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = torch.tensor([im.size[::-1]])
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(im)
>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")
>>> im
```
![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
## [](#batch-processing)Batch processing
You can pass multiple sets of images and text queries to search for different (or same) objects in several images. Let’s use both an astronaut image and the beach image together. For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images, PyTorch tensors, or NumPy arrays.
```
>>> images = [image, im]
>>> text_queries = [
... ["human face", "rocket", "nasa badge", "star-spangled banner"],
... ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```
Previously for post-processing you passed the single image’s size as a tensor, but you can also pass a tuple, or, in the case of several images, a list of tuples. Let’s create predictions for the two examples, and visualize the second one (`image_idx = 1`).
```
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = [x.size[::-1] for x in images]
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])
>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")
>>> images[image_idx]
```
![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
## [](#imageguided-object-detection)Image-guided object detection
In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means you can use an image query to find similar objects in the target image. Unlike with text queries, only a single example image is allowed.
Let’s take an image with two cats on a couch as a target image, and an image of a single cat as a query:
```
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)
>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```
Let’s take a quick look at the images:
```
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```
![Cats](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png)
In the preprocessing step, instead of text queries, you now need to use `query_images`:
```
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
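The batch now contains the preprocessed query image in addition to the target image. You can check which keys are present; `query_pixel_values` is the name used by the OWL-ViT processor, but verify it on your version:

```
>>> print(list(inputs.keys()))
```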
For predictions, instead of passing the inputs to the model, pass them to [image\_guided\_detection()](/docs/transformers/v4.30.0/en/model_doc/owlvit#transformers.OwlViTForObjectDetection.image_guided_detection). Draw the predictions as before except now there are no labels.
```
>>> with torch.no_grad():
... outputs = model.image_guided_detection(**inputs)
... target_sizes = torch.tensor([image_target.size[::-1]])
... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(image_target)
>>> scores = results["scores"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score in zip(boxes, scores):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)
>>> image_target
```
![Cats with bounding boxes](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png)
If you’d like to interactively try out inference with OWL-ViT, check out this demo:
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/zero_shot_object_detection","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Zero-shot object detection"}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block 
lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Zero-shot object detection</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option value="14">v4.17.0</option><option 
value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] 
after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-4" href="/docs/transformers/tasks/image_classification">Image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/semantic_segmentation">Semantic segmentation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/video_classification">Video classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/object_detection">Object detection </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/zero_shot_object_detection">Zero-shot object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_image_classification">Zero-shot image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon SageMaker </a><a class="transform py-1 pr-2 pl-2 
# [](#zeroshot-object-detection)Zero-shot object detection

Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and are limited to detecting the set of classes from the training data.

Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model, which uses a different approach. OWL-ViT is an open-vocabulary object detector: it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets.

OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip) with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads. CLIP is used to associate images with their corresponding textual descriptions, while ViT processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using a bipartite matching loss.

With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.

In this guide, you will learn how to use OWL-ViT:

- to detect objects based on text prompts
- for batch object detection
- for image-guided object detection

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```
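The examples below also import scikit-image, NumPy, Pillow, requests, matplotlib, and PyTorch. These are not installed by `transformers` alone, so if your environment is missing any of them, something along these lines should cover it (the package list is inferred from the imports used in this guide, not an official requirement):

```bash
# assumed helper libraries for the examples in this guide (inferred from their imports)
pip install -q scikit-image numpy pillow requests matplotlib torch
```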
## [](#zeroshot-object-detection-pipeline)Zero-shot object detection pipeline

The simplest way to try out inference with OWL-ViT is to use it in a [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit):

```python
>>> from transformers import pipeline

>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.

```python
>>> import skimage
>>> import numpy as np
>>> from PIL import Image

>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")

>>> image
```

![Astronaut Eileen Collins](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png)
Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL. We also pass text descriptions for all items we want to query the image for.

```python
>>> predictions = detector(
...     image,
...     candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408,
  'label': 'human face',
  'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
 {'score': 0.28099656105041504,
  'label': 'nasa badge',
  'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
 {'score': 0.2110239565372467,
  'label': 'rocket',
  'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
 {'score': 0.13790413737297058,
  'label': 'star-spangled banner',
  'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
 {'score': 0.11950037628412247,
  'label': 'nasa badge',
  'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
 {'score': 0.10649408400058746,
  'label': 'rocket',
  'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
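Each prediction is a dictionary with a confidence `score`, the matched `label`, and a `box` given in absolute pixel coordinates of the original image. If you only want the most confident hits, a minimal sketch like the following works on the output above (the 0.2 cutoff is an arbitrary value chosen for illustration):

```python
>>> # keep only detections above an arbitrary confidence cutoff
>>> confident = [pred for pred in predictions if pred["score"] > 0.2]
>>> [pred["label"] for pred in confident]
['human face', 'nasa badge', 'rocket']
```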
Let's visualize the predictions:

```python
>>> from PIL import ImageDraw

>>> draw = ImageDraw.Draw(image)

>>> for prediction in predictions:
...     box = prediction["box"]
...     label = prediction["label"]
...     score = prediction["score"]
...     xmin, ymin, xmax, ymax = box.values()
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")

>>> image
```

![Visualized predictions on NASA image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png)
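As noted above, the pipeline also accepts a local file path or an image URL instead of a PIL image. A minimal sketch, reusing the astronaut image hosted with the documentation (the variable name `predictions_from_url` is just for illustration):

```python
>>> # pass an image URL (or a local file path) directly to the pipeline
>>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png"
>>> predictions_from_url = detector(image_url, candidate_labels=["human face", "rocket"])
```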
## [](#textprompted-zeroshot-object-detection-by-hand)Text-prompted zero-shot object detection by hand

Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same result manually.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit). Here we'll use the same checkpoint as before:

```python
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
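Optionally, if a GPU is available, you can move the model to it to speed up inference. This step is not part of the original walkthrough; if you use it, remember to also move the processed inputs to the same device before calling the model:

```python
>>> import torch

>>> # optional: run the model on a GPU when one is available
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = model.to(device)
>>> # later on, before the forward pass: inputs = inputs.to(device)
```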
Let's take a different image to switch things up.

```python
>>> import requests

>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)

>>> im
```

![Beach photo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png)

Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [CLIPTokenizer](/docs/transformers/v4.30.0/en/model_doc/clip#transformers.CLIPTokenizer) that takes care of the text inputs.

```python
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
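It can help to take a quick look at what the processor returns. The sketch below assumes the usual OWL-ViT processor output of tokenized text plus a preprocessed image tensor; exact keys and shapes may vary by checkpoint:

```python
>>> # tokenized queries (input_ids, attention_mask) plus the resized, normalized image (pixel_values)
>>> sorted(inputs.keys())
['attention_mask', 'input_ids', 'pixel_values']
```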
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the [post_process_object_detection()](/docs/transformers/v4.30.0/en/model_doc/owlvit#transformers.OwlViTImageProcessor.post_process_object_detection) method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:

```python
>>> import torch

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     target_sizes = torch.tensor([im.size[::-1]])
...     results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]

>>> draw = ImageDraw.Draw(im)

>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()

>>> for box, score, label in zip(boxes, scores, labels):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")

>>> im
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
## [](#batch-processing)Batch processing

You can pass multiple sets of images and text queries to search for different (or the same) objects in several images. Let's use the astronaut image and the beach image together. For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images, PyTorch tensors, or NumPy arrays.

```python
>>> images = [image, im]
>>> text_queries = [
...     ["human face", "rocket", "nasa badge", "star-spangled banner"],
...     ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```
Previously for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in the case of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (`image_idx = 1`).

```python
>>> with torch.no_grad():
...     outputs = model(**inputs)
...     target_sizes = [x.size[::-1] for x in images]
...     results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)

>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])

>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()

>>> for box, score, label in zip(boxes, scores, labels):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")

>>> images[image_idx]
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
## [](#imageguided-object-detection)Image-guided object detection

In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means you can use an image query to find similar objects in the target image. Unlike text queries, only a single example image is allowed.

Let's take an image with two cats on a couch as a target image, and an image of a single cat as a query:

```python
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)

>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```

Let's take a quick look at the images:

```python
>>> import matplotlib.pyplot as plt

>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```

![Cats](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png)

In the preprocessing step, instead of text queries, you now need to use `query_images`:

```python
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
as before except now there are no labels.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad():
<span class="hljs-meta">... </span> outputs = model.image_guided_detection(**inputs)
<span class="hljs-meta">... </span> target_sizes = torch.tensor([image_target.size[::-<span class="hljs-number">1</span>]])
<span class="hljs-meta">... </span> results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span>draw = ImageDraw.Draw(image_target)
<span class="hljs-meta">>>> </span>scores = results[<span class="hljs-string">"scores"</span>].tolist()
<span class="hljs-meta">>>> </span>boxes = results[<span class="hljs-string">"boxes"</span>].tolist()
<span class="hljs-meta">>>> </span><span class="hljs-keyword">for</span> box, score, label <span class="hljs-keyword">in</span> <span class="hljs-built_in">zip</span>(boxes, scores, labels):
<span class="hljs-meta">... </span> xmin, ymin, xmax, ymax = box
<span class="hljs-meta">... </span> draw.rectangle((xmin, ymin, xmax, ymax), outline=<span class="hljs-string">"white"</span>, width=<span class="hljs-number">4</span>)
<span class="hljs-meta">>>> </span>image_target</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"></div> <p>If you’d like to interactively try out inference with OWL-ViT, check out this demo:</p> <iframe src="https://adirik-owl-vit.hf.space" frameborder="0" width="850" height="450"></iframe> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/object_detection" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Object detection</a>
<a href="/docs/transformers/tasks/zero_shot_image_classification" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Zero-shot image classification<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Zero-shot object detection","isExpanded":true,"id":"zeroshot-object-detection","url":"#zeroshot-object-detection","sections":[{"title":"Zero-shot object detection pipeline","isExpanded":true,"id":"zeroshot-object-detection-pipeline","url":"#zeroshot-object-detection-pipeline"},{"title":"Text-prompted zero-shot object detection by hand","isExpanded":true,"id":"textprompted-zeroshot-object-detection-by-hand","url":"#textprompted-zeroshot-object-detection-by-hand"},{"title":"Batch processing","isExpanded":true,"id":"batch-processing","url":"#batch-processing"},{"title":"Image-guided object detection","isExpanded":true,"id":"imageguided-object-detection","url":"#imageguided-object-detection"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#zeroshot-object-detection" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-object-detection"><wbr>Zero-shot object detection</a> <a href="#zeroshot-object-detection-pipeline" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-object-detection-pipeline"><wbr>Zero-shot object detection pipeline</a> <a href="#textprompted-zeroshot-object-detection-by-hand" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-textprompted-zeroshot-object-detection-by-hand"><wbr>Text-prompted zero-shot object detection by hand</a> <a href="#batch-processing" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-batch-processing"><wbr>Batch processing</a> <a href="#imageguided-object-detection" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-imageguided-object-detection"><wbr>Image-guided object detection</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/zero_shot_object_detection" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/zero_shot_object_detection");
}
</script>
<iframe name="__privateStripeMetricsController0020" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fzero_shot_object_detection&title=Zero-shot%20object%20detection&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:03.526Z |
Zero-shot image classification | https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification | Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories.
Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to “map” certain image features to labels. When there’s a need to use such a model for a classification task that introduces a new set of labels, fine-tuning is required to “recalibrate” the model.
In contrast, zero-shot or open vocabulary image classification models are typically multi-modal models that have been trained on a large dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks including zero-shot image classification.
This is a more flexible approach to image classification: it allows models to generalize to new and unseen categories without additional training data, and it enables users to query images with free-form text descriptions of their target objects.
In this guide you’ll learn how to:
- create a zero-shot image classification pipeline
- run zero-shot image classification inference by hand
Before you begin, make sure you have all the necessary libraries installed:
```
pip install -q transformers```
## [](#zeroshot-image-classification-pipeline)Zero-shot image classification pipeline
The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads):
```
>>> from transformers import pipeline
>>> checkpoint = "openai/clip-vit-large-patch14"
>>> classifier = pipeline(model=checkpoint, task="zero-shot-image-classification")```
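As an aside, `pipeline()` can also be called with just the task name, in which case it falls back to a default checkpoint for zero-shot image classification (which checkpoint that is depends on your `transformers` version). The rest of this guide keeps using the explicitly chosen checkpoint above; the `default_classifier` name below is only for illustration.
```
>>> # Alternative: let pipeline() pick a default zero-shot image classification checkpoint
>>> default_classifier = pipeline(task="zero-shot-image-classification")```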
Next, choose an image you’d like to classify.
```
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image```
![Photo of an owl](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg)
Pass the image and the candidate labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL. The candidate labels can be simple words like in this example, or more descriptive.
```
>>> predictions = classifier(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
{'score': 0.000199399160919711, 'label': 'seagull'},
{'score': 7.392891711788252e-05, 'label': 'fox'},
{'score': 5.96074532950297e-05, 'label': 'bear'}]```
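As mentioned above, the pipeline also accepts an image URL or a local file path directly, so the manual `PIL` step is optional. A minimal sketch reusing the `classifier` and `url` variables from the previous snippets:
```
>>> # Pass the URL string straight to the pipeline instead of a pre-loaded PIL image
>>> predictions = classifier(url, candidate_labels=["fox", "bear", "seagull", "owl"])```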
## [](#zeroshot-image-classification-by-hand)Zero-shot image classification by hand
Now that you’ve seen how to use the zero-shot image classification pipeline, let’s take a look at how you can run zero-shot image classification manually.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads). Here we’ll use the same checkpoint as before:
```
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)```
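Optionally, if a GPU is available you can move the model to it before running inference. This isn’t part of the original walkthrough, and if you do this, remember to move the processed inputs to the same device later as well.
```
>>> import torch
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = model.to(device)```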
Let’s take a different image to switch things up.
```
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image```
![Photo of a car](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg)
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs.
```
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)```
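To see what the processor actually produced, you can inspect the returned batch. For a CLIP-style checkpoint it typically holds the tokenized labels (`input_ids`, `attention_mask`) and the preprocessed image (`pixel_values`), though the exact keys are an assumption that depends on the processor.
```
>>> # Inspect the tensors the processor prepared for the model
>>> {key: value.shape for key, value in inputs.items()}```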
Pass the inputs through the model, and post-process the results:
```
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()
>>> result = [
... {"score": score, "label": candidate_label}
... for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])
... ]
>>> result
[{'score': 0.998572, 'label': 'car'},
{'score': 0.0010570387, 'label': 'bike'},
{'score': 0.0003393686, 'label': 'tree'},
{'score': 3.1572064e-05, 'label': 'cat'}]```
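Finally, zero-shot classification with CLIP-style models often benefits from wrapping the candidate labels in a short prompt template instead of passing bare words. The sketch below reuses `processor`, `model`, `image`, and `candidate_labels` from above; the template string is a common choice rather than something required by the checkpoint.
```
>>> # Wrap each label in a prompt template before encoding it
>>> text = [f"a photo of a {label}" for label in candidate_labels]
>>> inputs = processor(images=image, text=text, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> probs = outputs.logits_per_image[0].softmax(dim=-1)
>>> candidate_labels[probs.argmax().item()]```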
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science.">
<meta property="fb:app_id" content="1321688464574422">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@huggingface">
<meta property="og:title" content="Zero-shot image classification">
<meta property="og:type" content="website">
<meta property="og:url" content="https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification">
<meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png">
<link rel="stylesheet" href="/front/build/kube-c0d76de/style.css">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css">
<title>Zero-shot image classification</title>
<script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script>
<script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.30.0/en/_app/assets/pages/__layout.svelte-hf-doc-builder.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.30.0/en/_app/error.svelte-hf-doc-builder.js"><meta name="hf:doc:metadata" content="{"local":"zeroshot-image-classification","sections":[{"local":"zeroshot-image-classification-pipeline","title":"Zero-shot image classification pipeline"},{"local":"zeroshot-image-classification-by-hand","title":"Zero-shot image classification by hand"}],"title":"Zero-shot image classification"}"></head>
<body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage">
<div class="flex min-h-screen flex-col">
<div class="SVELTE_HYDRATER contents" data-props="{"isWide":true,"isZh":false}" data-target="MainHeader"><header class="border-b border-gray-100"><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <button class="relative flex w-8 flex-none items-center justify-center place-self-stretch lg:hidden" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="btn ml-2" href="/join">Sign Up</a></li></ul></nav></div></header></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div>
<div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div>
<main class="flex flex-1 flex-col "><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapters":[{"title":"Get started","isExpanded":true,"sections":[{"title":"🤗 Transformers","isExpanded":true,"id":"index","url":"/docs/transformers/index"},{"title":"Quick tour","isExpanded":true,"id":"quicktour","url":"/docs/transformers/quicktour"},{"title":"Installation","isExpanded":true,"id":"installation","url":"/docs/transformers/installation"}]},{"title":"Tutorials","isExpanded":true,"sections":[{"title":"Run inference with pipelines","isExpanded":true,"id":"pipeline_tutorial","url":"/docs/transformers/pipeline_tutorial"},{"title":"Write portable code with AutoClass","isExpanded":true,"id":"autoclass_tutorial","url":"/docs/transformers/autoclass_tutorial"},{"title":"Preprocess data","isExpanded":true,"id":"preprocessing","url":"/docs/transformers/preprocessing"},{"title":"Fine-tune a pretrained model","isExpanded":true,"id":"training","url":"/docs/transformers/training"},{"title":"Train with a script","isExpanded":true,"id":"run_scripts","url":"/docs/transformers/run_scripts"},{"title":"Set up distributed training with 🤗 Accelerate","isExpanded":true,"id":"accelerate","url":"/docs/transformers/accelerate"},{"title":"Share your model","isExpanded":true,"id":"model_sharing","url":"/docs/transformers/model_sharing"},{"title":"Agents","isExpanded":true,"id":"transformers_agents","url":"/docs/transformers/transformers_agents"}]},{"title":"Task Guides","isExpanded":true,"sections":[{"title":"Natural Language Processing","isExpanded":false,"sections":[{"title":"Text classification","id":"tasks/sequence_classification","url":"/docs/transformers/tasks/sequence_classification"},{"title":"Token classification","id":"tasks/token_classification","url":"/docs/transformers/tasks/token_classification"},{"title":"Question answering","id":"tasks/question_answering","url":"/docs/transformers/tasks/question_answering"},{"title":"Causal language modeling","id":"tasks/language_modeling","url":"/docs/transformers/tasks/language_modeling"},{"title":"Masked language modeling","id":"tasks/masked_language_modeling","url":"/docs/transformers/tasks/masked_language_modeling"},{"title":"Translation","id":"tasks/translation","url":"/docs/transformers/tasks/translation"},{"title":"Summarization","id":"tasks/summarization","url":"/docs/transformers/tasks/summarization"},{"title":"Multiple choice","id":"tasks/multiple_choice","url":"/docs/transformers/tasks/multiple_choice"}]},{"title":"Audio","isExpanded":false,"sections":[{"title":"Audio classification","id":"tasks/audio_classification","url":"/docs/transformers/tasks/audio_classification"},{"title":"Automatic speech recognition","id":"tasks/asr","url":"/docs/transformers/tasks/asr"}]},{"title":"Computer Vision","isExpanded":true,"sections":[{"title":"Image classification","id":"tasks/image_classification","url":"/docs/transformers/tasks/image_classification"},{"title":"Semantic segmentation","id":"tasks/semantic_segmentation","url":"/docs/transformers/tasks/semantic_segmentation"},{"title":"Video classification","id":"tasks/video_classification","url":"/docs/transformers/tasks/video_classification"},{"title":"Object detection","id":"tasks/object_detection","url":"/docs/transformers/tasks/object_detection"},{"title":"Zero-shot object detection","id":"tasks/zero_shot_object_detection","url":"/docs/transformers/tasks/zero_shot_object_detection"},{"title":"Zero-shot image 
classification","isExpanded":true,"id":"tasks/zero_shot_image_classification","url":"/docs/transformers/tasks/zero_shot_image_classification"},{"title":"Depth estimation","id":"tasks/monocular_depth_estimation","url":"/docs/transformers/tasks/monocular_depth_estimation"}]},{"title":"Multimodal","isExpanded":false,"sections":[{"title":"Image captioning","id":"tasks/image_captioning","url":"/docs/transformers/tasks/image_captioning"},{"title":"Document Question Answering","id":"tasks/document_question_answering","url":"/docs/transformers/tasks/document_question_answering"},{"title":"Text to speech","id":"tasks/text-to-speech","url":"/docs/transformers/tasks/text-to-speech"}]}]},{"title":"Developer guides","isExpanded":true,"sections":[{"title":"Use fast tokenizers from 🤗 Tokenizers","isExpanded":true,"id":"fast_tokenizers","url":"/docs/transformers/fast_tokenizers"},{"title":"Run inference with multilingual models","isExpanded":true,"id":"multilingual","url":"/docs/transformers/multilingual"},{"title":"Customize text generation strategy","isExpanded":true,"id":"generation_strategies","url":"/docs/transformers/generation_strategies"},{"title":"Use model-specific APIs","isExpanded":true,"id":"create_a_model","url":"/docs/transformers/create_a_model"},{"title":"Share a custom model","isExpanded":true,"id":"custom_models","url":"/docs/transformers/custom_models"},{"title":"Run training on Amazon SageMaker","isExpanded":true,"id":"sagemaker","url":"/docs/transformers/sagemaker"},{"title":"Export to ONNX","isExpanded":true,"id":"serialization","url":"/docs/transformers/serialization"},{"title":"Export to TFLite","isExpanded":true,"id":"tflite","url":"/docs/transformers/tflite"},{"title":"Export to TorchScript","isExpanded":true,"id":"torchscript","url":"/docs/transformers/torchscript"},{"title":"Benchmarks","isExpanded":true,"id":"benchmarks","url":"/docs/transformers/benchmarks"},{"title":"Notebooks with examples","isExpanded":true,"id":"notebooks","url":"/docs/transformers/notebooks"},{"title":"Community resources","isExpanded":true,"id":"community","url":"/docs/transformers/community"},{"title":"Custom Tools and Prompts","isExpanded":true,"id":"custom_tools","url":"/docs/transformers/custom_tools"},{"title":"Troubleshoot","isExpanded":true,"id":"troubleshooting","url":"/docs/transformers/troubleshooting"}]},{"title":"Performance and scalability","isExpanded":true,"sections":[{"title":"Overview","isExpanded":true,"id":"performance","url":"/docs/transformers/performance"},{"title":"Training on one GPU","isExpanded":true,"id":"perf_train_gpu_one","url":"/docs/transformers/perf_train_gpu_one"},{"title":"Training on many GPUs","isExpanded":true,"id":"perf_train_gpu_many","url":"/docs/transformers/perf_train_gpu_many"},{"title":"Training on CPU","isExpanded":true,"id":"perf_train_cpu","url":"/docs/transformers/perf_train_cpu"},{"title":"Training on many CPUs","isExpanded":true,"id":"perf_train_cpu_many","url":"/docs/transformers/perf_train_cpu_many"},{"title":"Training on TPUs","isExpanded":true,"id":"perf_train_tpu","url":"/docs/transformers/perf_train_tpu"},{"title":"Training on TPU with TensorFlow","isExpanded":true,"id":"perf_train_tpu_tf","url":"/docs/transformers/perf_train_tpu_tf"},{"title":"Training on Specialized Hardware","isExpanded":true,"id":"perf_train_special","url":"/docs/transformers/perf_train_special"},{"title":"Inference on CPU","isExpanded":true,"id":"perf_infer_cpu","url":"/docs/transformers/perf_infer_cpu"},{"title":"Inference on one 
GPU","isExpanded":true,"id":"perf_infer_gpu_one","url":"/docs/transformers/perf_infer_gpu_one"},{"title":"Inference on many GPUs","isExpanded":true,"id":"perf_infer_gpu_many","url":"/docs/transformers/perf_infer_gpu_many"},{"title":"Inference on Specialized Hardware","isExpanded":true,"id":"perf_infer_special","url":"/docs/transformers/perf_infer_special"},{"title":"Custom hardware for training","isExpanded":true,"id":"perf_hardware","url":"/docs/transformers/perf_hardware"},{"title":"Instantiating a big model","isExpanded":true,"id":"big_models","url":"/docs/transformers/big_models"},{"title":"Debugging","isExpanded":true,"id":"debugging","url":"/docs/transformers/debugging"},{"title":"Hyperparameter Search using Trainer API","isExpanded":true,"id":"hpo_train","url":"/docs/transformers/hpo_train"},{"title":"XLA Integration for TensorFlow Models","isExpanded":true,"id":"tf_xla","url":"/docs/transformers/tf_xla"}]},{"title":"Contribute","isExpanded":true,"sections":[{"title":"How to contribute to transformers?","isExpanded":true,"id":"contributing","url":"/docs/transformers/contributing"},{"title":"How to add a model to 🤗 Transformers?","isExpanded":true,"id":"add_new_model","url":"/docs/transformers/add_new_model"},{"title":"How to convert a 🤗 Transformers model to TensorFlow?","isExpanded":true,"id":"add_tensorflow_model","url":"/docs/transformers/add_tensorflow_model"},{"title":"How to add a pipeline to 🤗 Transformers?","isExpanded":true,"id":"add_new_pipeline","url":"/docs/transformers/add_new_pipeline"},{"title":"Testing","isExpanded":true,"id":"testing","url":"/docs/transformers/testing"},{"title":"Checks on a Pull Request","isExpanded":true,"id":"pr_checks","url":"/docs/transformers/pr_checks"}]},{"title":"Conceptual guides","isExpanded":true,"sections":[{"title":"Philosophy","isExpanded":true,"id":"philosophy","url":"/docs/transformers/philosophy"},{"title":"Glossary","isExpanded":true,"id":"glossary","url":"/docs/transformers/glossary"},{"title":"What 🤗 Transformers can do","isExpanded":true,"id":"task_summary","url":"/docs/transformers/task_summary"},{"title":"How 🤗 Transformers solve tasks","isExpanded":true,"id":"tasks_explained","url":"/docs/transformers/tasks_explained"},{"title":"The Transformer model family","isExpanded":true,"id":"model_summary","url":"/docs/transformers/model_summary"},{"title":"Summary of the tokenizers","isExpanded":true,"id":"tokenizer_summary","url":"/docs/transformers/tokenizer_summary"},{"title":"Attention mechanisms","isExpanded":true,"id":"attention","url":"/docs/transformers/attention"},{"title":"Padding and truncation","isExpanded":true,"id":"pad_truncation","url":"/docs/transformers/pad_truncation"},{"title":"BERTology","isExpanded":true,"id":"bertology","url":"/docs/transformers/bertology"},{"title":"Perplexity of fixed-length models","isExpanded":true,"id":"perplexity","url":"/docs/transformers/perplexity"},{"title":"Pipelines for webserver inference","isExpanded":true,"id":"pipeline_webserver","url":"/docs/transformers/pipeline_webserver"}]},{"title":"API","isExpanded":true,"sections":[{"title":"Main Classes","isExpanded":true,"sections":[{"title":"Agents and Tools","isExpanded":true,"id":"main_classes/agent","url":"/docs/transformers/main_classes/agent"},{"title":"Auto 
Classes","isExpanded":true,"id":"model_doc/auto","url":"/docs/transformers/model_doc/auto"},{"title":"Callbacks","isExpanded":true,"id":"main_classes/callback","url":"/docs/transformers/main_classes/callback"},{"title":"Configuration","isExpanded":true,"id":"main_classes/configuration","url":"/docs/transformers/main_classes/configuration"},{"title":"Data Collator","isExpanded":true,"id":"main_classes/data_collator","url":"/docs/transformers/main_classes/data_collator"},{"title":"Keras callbacks","isExpanded":true,"id":"main_classes/keras_callbacks","url":"/docs/transformers/main_classes/keras_callbacks"},{"title":"Logging","isExpanded":true,"id":"main_classes/logging","url":"/docs/transformers/main_classes/logging"},{"title":"Models","isExpanded":true,"id":"main_classes/model","url":"/docs/transformers/main_classes/model"},{"title":"Text Generation","isExpanded":true,"id":"main_classes/text_generation","url":"/docs/transformers/main_classes/text_generation"},{"title":"ONNX","isExpanded":true,"id":"main_classes/onnx","url":"/docs/transformers/main_classes/onnx"},{"title":"Optimization","isExpanded":true,"id":"main_classes/optimizer_schedules","url":"/docs/transformers/main_classes/optimizer_schedules"},{"title":"Model outputs","isExpanded":true,"id":"main_classes/output","url":"/docs/transformers/main_classes/output"},{"title":"Pipelines","isExpanded":true,"id":"main_classes/pipelines","url":"/docs/transformers/main_classes/pipelines"},{"title":"Processors","isExpanded":true,"id":"main_classes/processors","url":"/docs/transformers/main_classes/processors"},{"title":"Quantization","isExpanded":true,"id":"main_classes/quantization","url":"/docs/transformers/main_classes/quantization"},{"title":"Tokenizer","isExpanded":true,"id":"main_classes/tokenizer","url":"/docs/transformers/main_classes/tokenizer"},{"title":"Trainer","isExpanded":true,"id":"main_classes/trainer","url":"/docs/transformers/main_classes/trainer"},{"title":"DeepSpeed Integration","isExpanded":true,"id":"main_classes/deepspeed","url":"/docs/transformers/main_classes/deepspeed"},{"title":"Feature Extractor","isExpanded":true,"id":"main_classes/feature_extractor","url":"/docs/transformers/main_classes/feature_extractor"},{"title":"Image Processor","isExpanded":true,"id":"main_classes/image_processor","url":"/docs/transformers/main_classes/image_processor"}]},{"title":"Models","isExpanded":true,"sections":[{"title":"Text 
models","isExpanded":false,"sections":[{"title":"ALBERT","id":"model_doc/albert","url":"/docs/transformers/model_doc/albert"},{"title":"BART","id":"model_doc/bart","url":"/docs/transformers/model_doc/bart"},{"title":"BARThez","id":"model_doc/barthez","url":"/docs/transformers/model_doc/barthez"},{"title":"BARTpho","id":"model_doc/bartpho","url":"/docs/transformers/model_doc/bartpho"},{"title":"BERT","id":"model_doc/bert","url":"/docs/transformers/model_doc/bert"},{"title":"BertGeneration","id":"model_doc/bert-generation","url":"/docs/transformers/model_doc/bert-generation"},{"title":"BertJapanese","id":"model_doc/bert-japanese","url":"/docs/transformers/model_doc/bert-japanese"},{"title":"Bertweet","id":"model_doc/bertweet","url":"/docs/transformers/model_doc/bertweet"},{"title":"BigBird","id":"model_doc/big_bird","url":"/docs/transformers/model_doc/big_bird"},{"title":"BigBirdPegasus","id":"model_doc/bigbird_pegasus","url":"/docs/transformers/model_doc/bigbird_pegasus"},{"title":"BioGpt","id":"model_doc/biogpt","url":"/docs/transformers/model_doc/biogpt"},{"title":"Blenderbot","id":"model_doc/blenderbot","url":"/docs/transformers/model_doc/blenderbot"},{"title":"Blenderbot Small","id":"model_doc/blenderbot-small","url":"/docs/transformers/model_doc/blenderbot-small"},{"title":"BLOOM","id":"model_doc/bloom","url":"/docs/transformers/model_doc/bloom"},{"title":"BORT","id":"model_doc/bort","url":"/docs/transformers/model_doc/bort"},{"title":"ByT5","id":"model_doc/byt5","url":"/docs/transformers/model_doc/byt5"},{"title":"CamemBERT","id":"model_doc/camembert","url":"/docs/transformers/model_doc/camembert"},{"title":"CANINE","id":"model_doc/canine","url":"/docs/transformers/model_doc/canine"},{"title":"CodeGen","id":"model_doc/codegen","url":"/docs/transformers/model_doc/codegen"},{"title":"ConvBERT","id":"model_doc/convbert","url":"/docs/transformers/model_doc/convbert"},{"title":"CPM","id":"model_doc/cpm","url":"/docs/transformers/model_doc/cpm"},{"title":"CPMANT","id":"model_doc/cpmant","url":"/docs/transformers/model_doc/cpmant"},{"title":"CTRL","id":"model_doc/ctrl","url":"/docs/transformers/model_doc/ctrl"},{"title":"DeBERTa","id":"model_doc/deberta","url":"/docs/transformers/model_doc/deberta"},{"title":"DeBERTa-v2","id":"model_doc/deberta-v2","url":"/docs/transformers/model_doc/deberta-v2"},{"title":"DialoGPT","id":"model_doc/dialogpt","url":"/docs/transformers/model_doc/dialogpt"},{"title":"DistilBERT","id":"model_doc/distilbert","url":"/docs/transformers/model_doc/distilbert"},{"title":"DPR","id":"model_doc/dpr","url":"/docs/transformers/model_doc/dpr"},{"title":"ELECTRA","id":"model_doc/electra","url":"/docs/transformers/model_doc/electra"},{"title":"Encoder Decoder Models","id":"model_doc/encoder-decoder","url":"/docs/transformers/model_doc/encoder-decoder"},{"title":"ERNIE","id":"model_doc/ernie","url":"/docs/transformers/model_doc/ernie"},{"title":"ErnieM","id":"model_doc/ernie_m","url":"/docs/transformers/model_doc/ernie_m"},{"title":"ESM","id":"model_doc/esm","url":"/docs/transformers/model_doc/esm"},{"title":"FLAN-T5","id":"model_doc/flan-t5","url":"/docs/transformers/model_doc/flan-t5"},{"title":"FLAN-UL2","id":"model_doc/flan-ul2","url":"/docs/transformers/model_doc/flan-ul2"},{"title":"FlauBERT","id":"model_doc/flaubert","url":"/docs/transformers/model_doc/flaubert"},{"title":"FNet","id":"model_doc/fnet","url":"/docs/transformers/model_doc/fnet"},{"title":"FSMT","id":"model_doc/fsmt","url":"/docs/transformers/model_doc/fsmt"},{"title":"Funnel 
Transformer","id":"model_doc/funnel","url":"/docs/transformers/model_doc/funnel"},{"title":"GPT","id":"model_doc/openai-gpt","url":"/docs/transformers/model_doc/openai-gpt"},{"title":"GPT Neo","id":"model_doc/gpt_neo","url":"/docs/transformers/model_doc/gpt_neo"},{"title":"GPT NeoX","id":"model_doc/gpt_neox","url":"/docs/transformers/model_doc/gpt_neox"},{"title":"GPT NeoX Japanese","id":"model_doc/gpt_neox_japanese","url":"/docs/transformers/model_doc/gpt_neox_japanese"},{"title":"GPT-J","id":"model_doc/gptj","url":"/docs/transformers/model_doc/gptj"},{"title":"GPT2","id":"model_doc/gpt2","url":"/docs/transformers/model_doc/gpt2"},{"title":"GPTBigCode","id":"model_doc/gpt_bigcode","url":"/docs/transformers/model_doc/gpt_bigcode"},{"title":"GPTSAN Japanese","id":"model_doc/gptsan-japanese","url":"/docs/transformers/model_doc/gptsan-japanese"},{"title":"GPTSw3","id":"model_doc/gpt-sw3","url":"/docs/transformers/model_doc/gpt-sw3"},{"title":"HerBERT","id":"model_doc/herbert","url":"/docs/transformers/model_doc/herbert"},{"title":"I-BERT","id":"model_doc/ibert","url":"/docs/transformers/model_doc/ibert"},{"title":"Jukebox","id":"model_doc/jukebox","url":"/docs/transformers/model_doc/jukebox"},{"title":"LED","id":"model_doc/led","url":"/docs/transformers/model_doc/led"},{"title":"LLaMA","id":"model_doc/llama","url":"/docs/transformers/model_doc/llama"},{"title":"Longformer","id":"model_doc/longformer","url":"/docs/transformers/model_doc/longformer"},{"title":"LongT5","id":"model_doc/longt5","url":"/docs/transformers/model_doc/longt5"},{"title":"LUKE","id":"model_doc/luke","url":"/docs/transformers/model_doc/luke"},{"title":"M2M100","id":"model_doc/m2m_100","url":"/docs/transformers/model_doc/m2m_100"},{"title":"MarianMT","id":"model_doc/marian","url":"/docs/transformers/model_doc/marian"},{"title":"MarkupLM","id":"model_doc/markuplm","url":"/docs/transformers/model_doc/markuplm"},{"title":"MBart and 
MBart-50","id":"model_doc/mbart","url":"/docs/transformers/model_doc/mbart"},{"title":"MEGA","id":"model_doc/mega","url":"/docs/transformers/model_doc/mega"},{"title":"MegatronBERT","id":"model_doc/megatron-bert","url":"/docs/transformers/model_doc/megatron-bert"},{"title":"MegatronGPT2","id":"model_doc/megatron_gpt2","url":"/docs/transformers/model_doc/megatron_gpt2"},{"title":"mLUKE","id":"model_doc/mluke","url":"/docs/transformers/model_doc/mluke"},{"title":"MobileBERT","id":"model_doc/mobilebert","url":"/docs/transformers/model_doc/mobilebert"},{"title":"MPNet","id":"model_doc/mpnet","url":"/docs/transformers/model_doc/mpnet"},{"title":"MT5","id":"model_doc/mt5","url":"/docs/transformers/model_doc/mt5"},{"title":"MVP","id":"model_doc/mvp","url":"/docs/transformers/model_doc/mvp"},{"title":"NEZHA","id":"model_doc/nezha","url":"/docs/transformers/model_doc/nezha"},{"title":"NLLB","id":"model_doc/nllb","url":"/docs/transformers/model_doc/nllb"},{"title":"NLLB-MoE","id":"model_doc/nllb-moe","url":"/docs/transformers/model_doc/nllb-moe"},{"title":"Nyströmformer","id":"model_doc/nystromformer","url":"/docs/transformers/model_doc/nystromformer"},{"title":"Open-Llama","id":"model_doc/open-llama","url":"/docs/transformers/model_doc/open-llama"},{"title":"OPT","id":"model_doc/opt","url":"/docs/transformers/model_doc/opt"},{"title":"Pegasus","id":"model_doc/pegasus","url":"/docs/transformers/model_doc/pegasus"},{"title":"PEGASUS-X","id":"model_doc/pegasus_x","url":"/docs/transformers/model_doc/pegasus_x"},{"title":"PhoBERT","id":"model_doc/phobert","url":"/docs/transformers/model_doc/phobert"},{"title":"PLBart","id":"model_doc/plbart","url":"/docs/transformers/model_doc/plbart"},{"title":"ProphetNet","id":"model_doc/prophetnet","url":"/docs/transformers/model_doc/prophetnet"},{"title":"QDQBert","id":"model_doc/qdqbert","url":"/docs/transformers/model_doc/qdqbert"},{"title":"RAG","id":"model_doc/rag","url":"/docs/transformers/model_doc/rag"},{"title":"REALM","id":"model_doc/realm","url":"/docs/transformers/model_doc/realm"},{"title":"Reformer","id":"model_doc/reformer","url":"/docs/transformers/model_doc/reformer"},{"title":"RemBERT","id":"model_doc/rembert","url":"/docs/transformers/model_doc/rembert"},{"title":"RetriBERT","id":"model_doc/retribert","url":"/docs/transformers/model_doc/retribert"},{"title":"RoBERTa","id":"model_doc/roberta","url":"/docs/transformers/model_doc/roberta"},{"title":"RoBERTa-PreLayerNorm","id":"model_doc/roberta-prelayernorm","url":"/docs/transformers/model_doc/roberta-prelayernorm"},{"title":"RoCBert","id":"model_doc/roc_bert","url":"/docs/transformers/model_doc/roc_bert"},{"title":"RoFormer","id":"model_doc/roformer","url":"/docs/transformers/model_doc/roformer"},{"title":"RWKV","id":"model_doc/rwkv","url":"/docs/transformers/model_doc/rwkv"},{"title":"Splinter","id":"model_doc/splinter","url":"/docs/transformers/model_doc/splinter"},{"title":"SqueezeBERT","id":"model_doc/squeezebert","url":"/docs/transformers/model_doc/squeezebert"},{"title":"SwitchTransformers","id":"model_doc/switch_transformers","url":"/docs/transformers/model_doc/switch_transformers"},{"title":"T5","id":"model_doc/t5","url":"/docs/transformers/model_doc/t5"},{"title":"T5v1.1","id":"model_doc/t5v1.1","url":"/docs/transformers/model_doc/t5v1.1"},{"title":"TAPEX","id":"model_doc/tapex","url":"/docs/transformers/model_doc/tapex"},{"title":"Transformer 
XL","id":"model_doc/transfo-xl","url":"/docs/transformers/model_doc/transfo-xl"},{"title":"UL2","id":"model_doc/ul2","url":"/docs/transformers/model_doc/ul2"},{"title":"X-MOD","id":"model_doc/xmod","url":"/docs/transformers/model_doc/xmod"},{"title":"XGLM","id":"model_doc/xglm","url":"/docs/transformers/model_doc/xglm"},{"title":"XLM","id":"model_doc/xlm","url":"/docs/transformers/model_doc/xlm"},{"title":"XLM-ProphetNet","id":"model_doc/xlm-prophetnet","url":"/docs/transformers/model_doc/xlm-prophetnet"},{"title":"XLM-RoBERTa","id":"model_doc/xlm-roberta","url":"/docs/transformers/model_doc/xlm-roberta"},{"title":"XLM-RoBERTa-XL","id":"model_doc/xlm-roberta-xl","url":"/docs/transformers/model_doc/xlm-roberta-xl"},{"title":"XLM-V","id":"model_doc/xlm-v","url":"/docs/transformers/model_doc/xlm-v"},{"title":"XLNet","id":"model_doc/xlnet","url":"/docs/transformers/model_doc/xlnet"},{"title":"YOSO","id":"model_doc/yoso","url":"/docs/transformers/model_doc/yoso"}]},{"title":"Vision models","isExpanded":false,"sections":[{"title":"BEiT","id":"model_doc/beit","url":"/docs/transformers/model_doc/beit"},{"title":"BiT","id":"model_doc/bit","url":"/docs/transformers/model_doc/bit"},{"title":"Conditional DETR","id":"model_doc/conditional_detr","url":"/docs/transformers/model_doc/conditional_detr"},{"title":"ConvNeXT","id":"model_doc/convnext","url":"/docs/transformers/model_doc/convnext"},{"title":"ConvNeXTV2","id":"model_doc/convnextv2","url":"/docs/transformers/model_doc/convnextv2"},{"title":"CvT","id":"model_doc/cvt","url":"/docs/transformers/model_doc/cvt"},{"title":"Deformable DETR","id":"model_doc/deformable_detr","url":"/docs/transformers/model_doc/deformable_detr"},{"title":"DeiT","id":"model_doc/deit","url":"/docs/transformers/model_doc/deit"},{"title":"DETA","id":"model_doc/deta","url":"/docs/transformers/model_doc/deta"},{"title":"DETR","id":"model_doc/detr","url":"/docs/transformers/model_doc/detr"},{"title":"DiNAT","id":"model_doc/dinat","url":"/docs/transformers/model_doc/dinat"},{"title":"DiT","id":"model_doc/dit","url":"/docs/transformers/model_doc/dit"},{"title":"DPT","id":"model_doc/dpt","url":"/docs/transformers/model_doc/dpt"},{"title":"EfficientFormer","id":"model_doc/efficientformer","url":"/docs/transformers/model_doc/efficientformer"},{"title":"EfficientNet","id":"model_doc/efficientnet","url":"/docs/transformers/model_doc/efficientnet"},{"title":"FocalNet","id":"model_doc/focalnet","url":"/docs/transformers/model_doc/focalnet"},{"title":"GLPN","id":"model_doc/glpn","url":"/docs/transformers/model_doc/glpn"},{"title":"ImageGPT","id":"model_doc/imagegpt","url":"/docs/transformers/model_doc/imagegpt"},{"title":"LeViT","id":"model_doc/levit","url":"/docs/transformers/model_doc/levit"},{"title":"Mask2Former","id":"model_doc/mask2former","url":"/docs/transformers/model_doc/mask2former"},{"title":"MaskFormer","id":"model_doc/maskformer","url":"/docs/transformers/model_doc/maskformer"},{"title":"MobileNetV1","id":"model_doc/mobilenet_v1","url":"/docs/transformers/model_doc/mobilenet_v1"},{"title":"MobileNetV2","id":"model_doc/mobilenet_v2","url":"/docs/transformers/model_doc/mobilenet_v2"},{"title":"MobileViT","id":"model_doc/mobilevit","url":"/docs/transformers/model_doc/mobilevit"},{"title":"MobileViTV2","id":"model_doc/mobilevitv2","url":"/docs/transformers/model_doc/mobilevitv2"},{"title":"NAT","id":"model_doc/nat","url":"/docs/transformers/model_doc/nat"},{"title":"PoolFormer","id":"model_doc/poolformer","url":"/docs/transformers/model_doc/poolformer"},{"title":"RegNet","id":"mod
el_doc/regnet","url":"/docs/transformers/model_doc/regnet"},{"title":"ResNet","id":"model_doc/resnet","url":"/docs/transformers/model_doc/resnet"},{"title":"SegFormer","id":"model_doc/segformer","url":"/docs/transformers/model_doc/segformer"},{"title":"SwiftFormer","id":"model_doc/swiftformer","url":"/docs/transformers/model_doc/swiftformer"},{"title":"Swin Transformer","id":"model_doc/swin","url":"/docs/transformers/model_doc/swin"},{"title":"Swin Transformer V2","id":"model_doc/swinv2","url":"/docs/transformers/model_doc/swinv2"},{"title":"Swin2SR","id":"model_doc/swin2sr","url":"/docs/transformers/model_doc/swin2sr"},{"title":"Table Transformer","id":"model_doc/table-transformer","url":"/docs/transformers/model_doc/table-transformer"},{"title":"TimeSformer","id":"model_doc/timesformer","url":"/docs/transformers/model_doc/timesformer"},{"title":"UperNet","id":"model_doc/upernet","url":"/docs/transformers/model_doc/upernet"},{"title":"VAN","id":"model_doc/van","url":"/docs/transformers/model_doc/van"},{"title":"VideoMAE","id":"model_doc/videomae","url":"/docs/transformers/model_doc/videomae"},{"title":"Vision Transformer (ViT)","id":"model_doc/vit","url":"/docs/transformers/model_doc/vit"},{"title":"ViT Hybrid","id":"model_doc/vit_hybrid","url":"/docs/transformers/model_doc/vit_hybrid"},{"title":"ViTMAE","id":"model_doc/vit_mae","url":"/docs/transformers/model_doc/vit_mae"},{"title":"ViTMSN","id":"model_doc/vit_msn","url":"/docs/transformers/model_doc/vit_msn"},{"title":"YOLOS","id":"model_doc/yolos","url":"/docs/transformers/model_doc/yolos"}]},{"title":"Audio models","isExpanded":false,"sections":[{"title":"Audio Spectrogram Transformer","id":"model_doc/audio-spectrogram-transformer","url":"/docs/transformers/model_doc/audio-spectrogram-transformer"},{"title":"CLAP","id":"model_doc/clap","url":"/docs/transformers/model_doc/clap"},{"title":"Hubert","id":"model_doc/hubert","url":"/docs/transformers/model_doc/hubert"},{"title":"MCTCT","id":"model_doc/mctct","url":"/docs/transformers/model_doc/mctct"},{"title":"MMS","id":"model_doc/mms","url":"/docs/transformers/model_doc/mms"},{"title":"SEW","id":"model_doc/sew","url":"/docs/transformers/model_doc/sew"},{"title":"SEW-D","id":"model_doc/sew-d","url":"/docs/transformers/model_doc/sew-d"},{"title":"Speech2Text","id":"model_doc/speech_to_text","url":"/docs/transformers/model_doc/speech_to_text"},{"title":"Speech2Text2","id":"model_doc/speech_to_text_2","url":"/docs/transformers/model_doc/speech_to_text_2"},{"title":"SpeechT5","id":"model_doc/speecht5","url":"/docs/transformers/model_doc/speecht5"},{"title":"UniSpeech","id":"model_doc/unispeech","url":"/docs/transformers/model_doc/unispeech"},{"title":"UniSpeech-SAT","id":"model_doc/unispeech-sat","url":"/docs/transformers/model_doc/unispeech-sat"},{"title":"Wav2Vec2","id":"model_doc/wav2vec2","url":"/docs/transformers/model_doc/wav2vec2"},{"title":"Wav2Vec2-Conformer","id":"model_doc/wav2vec2-conformer","url":"/docs/transformers/model_doc/wav2vec2-conformer"},{"title":"Wav2Vec2Phoneme","id":"model_doc/wav2vec2_phoneme","url":"/docs/transformers/model_doc/wav2vec2_phoneme"},{"title":"WavLM","id":"model_doc/wavlm","url":"/docs/transformers/model_doc/wavlm"},{"title":"Whisper","id":"model_doc/whisper","url":"/docs/transformers/model_doc/whisper"},{"title":"XLS-R","id":"model_doc/xls_r","url":"/docs/transformers/model_doc/xls_r"},{"title":"XLSR-Wav2Vec2","id":"model_doc/xlsr_wav2vec2","url":"/docs/transformers/model_doc/xlsr_wav2vec2"}]},{"title":"Multimodal 
models","isExpanded":false,"sections":[{"title":"ALIGN","id":"model_doc/align","url":"/docs/transformers/model_doc/align"},{"title":"AltCLIP","id":"model_doc/altclip","url":"/docs/transformers/model_doc/altclip"},{"title":"BLIP","id":"model_doc/blip","url":"/docs/transformers/model_doc/blip"},{"title":"BLIP-2","id":"model_doc/blip-2","url":"/docs/transformers/model_doc/blip-2"},{"title":"BridgeTower","id":"model_doc/bridgetower","url":"/docs/transformers/model_doc/bridgetower"},{"title":"Chinese-CLIP","id":"model_doc/chinese_clip","url":"/docs/transformers/model_doc/chinese_clip"},{"title":"CLIP","id":"model_doc/clip","url":"/docs/transformers/model_doc/clip"},{"title":"CLIPSeg","id":"model_doc/clipseg","url":"/docs/transformers/model_doc/clipseg"},{"title":"Data2Vec","id":"model_doc/data2vec","url":"/docs/transformers/model_doc/data2vec"},{"title":"DePlot","id":"model_doc/deplot","url":"/docs/transformers/model_doc/deplot"},{"title":"Donut","id":"model_doc/donut","url":"/docs/transformers/model_doc/donut"},{"title":"FLAVA","id":"model_doc/flava","url":"/docs/transformers/model_doc/flava"},{"title":"GIT","id":"model_doc/git","url":"/docs/transformers/model_doc/git"},{"title":"GroupViT","id":"model_doc/groupvit","url":"/docs/transformers/model_doc/groupvit"},{"title":"LayoutLM","id":"model_doc/layoutlm","url":"/docs/transformers/model_doc/layoutlm"},{"title":"LayoutLMV2","id":"model_doc/layoutlmv2","url":"/docs/transformers/model_doc/layoutlmv2"},{"title":"LayoutLMV3","id":"model_doc/layoutlmv3","url":"/docs/transformers/model_doc/layoutlmv3"},{"title":"LayoutXLM","id":"model_doc/layoutxlm","url":"/docs/transformers/model_doc/layoutxlm"},{"title":"LiLT","id":"model_doc/lilt","url":"/docs/transformers/model_doc/lilt"},{"title":"LXMERT","id":"model_doc/lxmert","url":"/docs/transformers/model_doc/lxmert"},{"title":"MatCha","id":"model_doc/matcha","url":"/docs/transformers/model_doc/matcha"},{"title":"MGP-STR","id":"model_doc/mgp-str","url":"/docs/transformers/model_doc/mgp-str"},{"title":"OneFormer","id":"model_doc/oneformer","url":"/docs/transformers/model_doc/oneformer"},{"title":"OWL-ViT","id":"model_doc/owlvit","url":"/docs/transformers/model_doc/owlvit"},{"title":"Perceiver","id":"model_doc/perceiver","url":"/docs/transformers/model_doc/perceiver"},{"title":"Pix2Struct","id":"model_doc/pix2struct","url":"/docs/transformers/model_doc/pix2struct"},{"title":"Segment Anything","id":"model_doc/sam","url":"/docs/transformers/model_doc/sam"},{"title":"Speech Encoder Decoder Models","id":"model_doc/speech-encoder-decoder","url":"/docs/transformers/model_doc/speech-encoder-decoder"},{"title":"TAPAS","id":"model_doc/tapas","url":"/docs/transformers/model_doc/tapas"},{"title":"TrOCR","id":"model_doc/trocr","url":"/docs/transformers/model_doc/trocr"},{"title":"TVLT","id":"model_doc/tvlt","url":"/docs/transformers/model_doc/tvlt"},{"title":"ViLT","id":"model_doc/vilt","url":"/docs/transformers/model_doc/vilt"},{"title":"Vision Encoder Decoder Models","id":"model_doc/vision-encoder-decoder","url":"/docs/transformers/model_doc/vision-encoder-decoder"},{"title":"Vision Text Dual Encoder","id":"model_doc/vision-text-dual-encoder","url":"/docs/transformers/model_doc/vision-text-dual-encoder"},{"title":"VisualBERT","id":"model_doc/visual_bert","url":"/docs/transformers/model_doc/visual_bert"},{"title":"X-CLIP","id":"model_doc/xclip","url":"/docs/transformers/model_doc/xclip"}]},{"title":"Reinforcement learning models","isExpanded":false,"sections":[{"title":"Decision 
Transformer","id":"model_doc/decision_transformer","url":"/docs/transformers/model_doc/decision_transformer"},{"title":"Trajectory Transformer","id":"model_doc/trajectory_transformer","url":"/docs/transformers/model_doc/trajectory_transformer"}]},{"title":"Time series models","isExpanded":false,"sections":[{"title":"Autoformer","id":"model_doc/autoformer","url":"/docs/transformers/model_doc/autoformer"},{"title":"Informer","id":"model_doc/informer","url":"/docs/transformers/model_doc/informer"},{"title":"Time Series Transformer","id":"model_doc/time_series_transformer","url":"/docs/transformers/model_doc/time_series_transformer"}]},{"title":"Graph models","isExpanded":false,"sections":[{"title":"Graphormer","id":"model_doc/graphormer","url":"/docs/transformers/model_doc/graphormer"}]}]},{"title":"Internal Helpers","isExpanded":true,"sections":[{"title":"Custom Layers and Utilities","isExpanded":true,"id":"internal/modeling_utils","url":"/docs/transformers/internal/modeling_utils"},{"title":"Utilities for pipelines","isExpanded":true,"id":"internal/pipelines_utils","url":"/docs/transformers/internal/pipelines_utils"},{"title":"Utilities for Tokenizers","isExpanded":true,"id":"internal/tokenization_utils","url":"/docs/transformers/internal/tokenization_utils"},{"title":"Utilities for Trainer","isExpanded":true,"id":"internal/trainer_utils","url":"/docs/transformers/internal/trainer_utils"},{"title":"Utilities for Generation","isExpanded":true,"id":"internal/generation_utils","url":"/docs/transformers/internal/generation_utils"},{"title":"Utilities for Image Processors","isExpanded":true,"id":"internal/image_processing_utils","url":"/docs/transformers/internal/image_processing_utils"},{"title":"Utilities for Audio processing","isExpanded":true,"id":"internal/audio_utils","url":"/docs/transformers/internal/audio_utils"},{"title":"General Utilities","isExpanded":true,"id":"internal/file_utils","url":"/docs/transformers/internal/file_utils"},{"title":"Utilities for Time 
Series","isExpanded":true,"id":"internal/time_series_utils","url":"/docs/transformers/internal/time_series_utils"}]}]}],"chapterId":"tasks/zero_shot_image_classification","docType":"docs","isLoggedIn":false,"lang":"en","langs":["de","en","es","fr","it","ko","pt","zh"],"library":"transformers","theme":"light","version":"v4.30.0","versions":[{"version":"main"},{"version":"v4.30.0"},{"version":"v4.29.1"},{"version":"v4.29.0"},{"version":"v4.28.1"},{"version":"v4.28.0"},{"version":"v4.27.2"},{"version":"v4.27.1"},{"version":"v4.27.0"},{"version":"v4.26.1"},{"version":"v4.26.0"},{"version":"v4.25.1"},{"version":"v4.24.0"},{"version":"v4.23.1"},{"version":"v4.23.0"},{"version":"v4.22.2"},{"version":"v4.22.1"},{"version":"v4.22.0"},{"version":"v4.21.3"},{"version":"v4.21.2"},{"version":"v4.21.1"},{"version":"v4.21.0"},{"version":"v4.20.1"},{"version":"v4.20.0"},{"version":"v4.19.4"},{"version":"v4.19.3"},{"version":"v4.19.2"},{"version":"v4.19.0"},{"version":"v4.18.0"},{"version":"v4.17.0"},{"version":"v4.16.2"},{"version":"v4.16.1"},{"version":"v4.16.0"},{"version":"v4.15.0"},{"version":"v4.14.1"},{"version":"v4.13.0"},{"sphinx":true,"version":"v4.12.5"},{"sphinx":true,"version":"v4.12.4"},{"sphinx":true,"version":"v4.12.2"},{"sphinx":true,"version":"v4.12.1"},{"sphinx":true,"version":"v4.12.0"},{"sphinx":true,"version":"v4.11.3"},{"sphinx":true,"version":"v4.11.2"},{"sphinx":true,"version":"v4.11.1"},{"sphinx":true,"version":"v4.11.0"},{"sphinx":true,"version":"v4.10.1"},{"sphinx":true,"version":"v4.10.0"},{"sphinx":true,"version":"v4.9.2"},{"sphinx":true,"version":"v4.9.1"},{"sphinx":true,"version":"v4.9.0"},{"sphinx":true,"version":"v4.8.2"},{"sphinx":true,"version":"v4.8.1"},{"sphinx":true,"version":"v4.8.0"},{"sphinx":true,"version":"v4.7.0"},{"sphinx":true,"version":"v4.6.0"},{"sphinx":true,"version":"v4.5.1"},{"sphinx":true,"version":"v4.5.0"},{"sphinx":true,"version":"v4.4.2"},{"sphinx":true,"version":"v4.4.1"},{"sphinx":true,"version":"v4.4.0"},{"sphinx":true,"version":"v4.3.3"},{"sphinx":true,"version":"v4.3.2"},{"sphinx":true,"version":"v4.3.1"},{"sphinx":true,"version":"v4.3.0"},{"sphinx":true,"version":"v4.2.2"},{"sphinx":true,"version":"v4.2.1"},{"sphinx":true,"version":"v4.2.0"},{"sphinx":true,"version":"v4.1.1"},{"sphinx":true,"version":"v4.1.0"},{"sphinx":true,"version":"v4.0.1"},{"sphinx":true,"version":"v4.0.0"},{"sphinx":true,"version":"v3.5.1"},{"sphinx":true,"version":"v3.5.0"},{"sphinx":true,"version":"v3.4.0"},{"sphinx":true,"version":"v3.3.1"},{"sphinx":true,"version":"v3.3.0"},{"sphinx":true,"version":"v3.2.0"},{"sphinx":true,"version":"v3.1.0"},{"sphinx":true,"version":"v3.0.2"},{"sphinx":true,"version":"v3.0.1"},{"sphinx":true,"version":"v3.0.0"},{"sphinx":true,"version":"v2.11.0"},{"sphinx":true,"version":"v2.10.0"},{"sphinx":true,"version":"v2.9.1"},{"sphinx":true,"version":"v2.9.0"},{"sphinx":true,"version":"v2.8.0"},{"sphinx":true,"version":"v2.7.0"},{"sphinx":true,"version":"v2.6.0"},{"sphinx":true,"version":"v2.5.1"},{"sphinx":true,"version":"v2.5.0"},{"sphinx":true,"version":"v2.4.1"},{"sphinx":true,"version":"v2.4.0"},{"sphinx":true,"version":"v2.3.0"},{"sphinx":true,"version":"v2.2.2"},{"sphinx":true,"version":"v2.2.1"},{"sphinx":true,"version":"v2.2.0"},{"sphinx":true,"version":"v2.1.1"},{"sphinx":true,"version":"v2.0.0"},{"sphinx":true,"version":"v1.2.0"},{"sphinx":true,"version":"v1.1.0"},{"sphinx":true,"version":"v1.0.0"},{"version":"doc-builder-html"}],"title":"Zero-shot image classification"}" data-target="SideMenu"> <div class="z-2 w-full flex-none 
lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Zero-shot image classification</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.30.0</option><option value="2">v4.29.1</option><option value="3">v4.28.1</option><option value="4">v4.27.2</option><option value="5">v4.26.1</option><option value="6">v4.25.1</option><option value="7">v4.24.0</option><option value="8">v4.23.1</option><option value="9">v4.22.2</option><option value="10">v4.21.3</option><option value="11">v4.20.1</option><option value="12">v4.19.4</option><option value="13">v4.18.0</option><option 
value="14">v4.17.0</option><option value="15">v4.16.2</option><option value="16">v4.15.0</option><option value="17">v4.14.1</option><option value="18">v4.13.0</option><option value="19">v4.12.5</option><option value="20">v4.11.3</option><option value="21">v4.10.1</option><option value="22">v4.9.2</option><option value="23">v4.8.2</option><option value="24">v4.7.0</option><option value="25">v4.6.0</option><option value="26">v4.5.1</option><option value="27">v4.4.2</option><option value="28">v4.3.3</option><option value="29">v4.2.2</option><option value="30">v4.1.1</option><option value="31">v4.0.1</option><option value="32">v3.5.1</option><option value="33">v3.4.0</option><option value="34">v3.3.1</option><option value="35">v3.2.0</option><option value="36">v3.1.0</option><option value="37">v3.0.2</option><option value="38">v2.11.0</option><option value="39">v2.10.0</option><option value="40">v2.9.1</option><option value="41">v2.8.0</option><option value="42">v2.7.0</option><option value="43">v2.6.0</option><option value="44">v2.5.1</option><option value="45">v2.4.1</option><option value="46">v2.3.0</option><option value="47">v2.2.2</option><option value="48">v2.1.1</option><option value="49">v2.0.0</option><option value="50">v1.2.0</option><option value="51">v1.1.0</option><option value="52">v1.0.0</option><option value="53">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 105,251</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/index">🤗 Transformers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/quicktour">Quick tour </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 
group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_tutorial">Run inference with pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/autoclass_tutorial">Write portable code with AutoClass </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/preprocessing">Preprocess data </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/training">Fine-tune a pretrained model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/run_scripts">Train with a script </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/accelerate">Set up distributed training with 🤗 Accelerate </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_sharing">Share your model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/transformers_agents">Agents </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/image_classification">Image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/semantic_segmentation">Semantic segmentation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/video_classification">Video classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/object_detection">Object detection </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/zero_shot_object_detection">Zero-shot object detection </a><a class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/tasks/zero_shot_image_classification">Zero-shot image classification </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/multilingual">Run inference with multilingual models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/generation_strategies">Customize text generation strategy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/create_a_model">Use model-specific APIs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_models">Share a custom model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/sagemaker">Run training on Amazon 
SageMaker </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/serialization">Export to ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tflite">Export to TFLite </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/torchscript">Export to TorchScript </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/benchmarks">Benchmarks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/notebooks">Notebooks with examples </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/community">Community resources </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/custom_tools">Custom Tools and Prompts </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/performance">Overview </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_one">Training on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_gpu_many">Training on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu">Training on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_cpu_many">Training on many CPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu">Training on TPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_train_special">Training on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_cpu">Inference on CPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_one">Inference on one GPU </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_gpu_many">Inference on many GPUs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_infer_special">Inference on Specialized Hardware </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perf_hardware">Custom hardware for training </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/big_models">Instantiating a big model </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/debugging">Debugging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/hpo_train">Hyperparameter Search using Trainer API </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tf_xla">XLA Integration for TensorFlow Models </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/contributing">How to contribute to transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_model">How to add a model to 🤗 Transformers? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/testing">Testing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/philosophy">Philosophy </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/glossary">Glossary </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/task_summary">What 🤗 Transformers can do </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tasks_explained">How 🤗 Transformers solve tasks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/model_summary">The Transformer model family </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/tokenizer_summary">Summary of the tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/attention">Attention mechanisms </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pad_truncation">Padding and truncation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/bertology">BERTology </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/perplexity">Perplexity of fixed-length models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/pipeline_webserver">Pipelines for webserver inference </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/agent">Agents and Tools </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/model_doc/auto">Auto Classes </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/callback">Callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/configuration">Configuration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/data_collator">Data Collator </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/keras_callbacks">Keras callbacks </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/logging">Logging </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/model">Models </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/text_generation">Text Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/onnx">ONNX </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/optimizer_schedules">Optimization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/output">Model outputs </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/pipelines">Pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/processors">Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/quantization">Quantization </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/tokenizer">Tokenizer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/trainer">Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/deepspeed">DeepSpeed Integration </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/feature_extractor">Feature Extractor </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> 
</span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/modeling_utils">Custom Layers and Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/pipelines_utils">Utilities for pipelines </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/tokenization_utils">Utilities for Tokenizers </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/trainer_utils">Utilities for Trainer </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/generation_utils">Utilities for Generation </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/image_processing_utils">Utilities for Image Processors </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/audio_utils">Utilities for Audio processing </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/file_utils">General Utilities </a><a class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div>
<div class="z-1 min-w-0 flex-1">
<div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg">
<div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div>
<p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience
</p>
<div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference
</div></div>
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div>
<div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes
</div></div></div>
<div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a>
<p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div>
# [](#zeroshot-image-classification)Zero-shot image classification

Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories.

Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to “map” certain image features to labels. When such a model needs to be used for a classification task that introduces a new set of labels, fine-tuning is required to “recalibrate” the model.

In contrast, zero-shot or open-vocabulary image classification models are typically multi-modal models that have been trained on a large dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks, including zero-shot image classification.

This is a more flexible approach to image classification that allows models to generalize to new and unseen categories without the need for additional training data, and it enables users to query images with free-form text descriptions of their target objects.

In this guide you’ll learn how to:

- create a zero-shot image classification pipeline
- run zero-shot image classification inference by hand

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```

## [](#zeroshot-image-classification-pipeline)Zero-shot image classification pipeline

The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding [pipeline()](/docs/transformers/v4.30.0/en/main_classes/pipelines#transformers.pipeline).
Instantiate a pipeline from a <a href="https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline
<span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"openai/clip-vit-large-patch14"</span>
<span class="hljs-meta">>>> </span>detector = pipeline(model=checkpoint, task=<span class="hljs-string">"zero-shot-image-classification"</span>)</pre></div> <p>Next, choose an image you’d like to classify.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> requests
<span class="hljs-meta">>>> </span>url = <span class="hljs-string">"https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"</span>
<span class="hljs-meta">>>> </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw)
<span class="hljs-meta">>>> </span>image</pre></div> <div class="flex justify-center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"></div> <p>Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options
include a local path to an image or an image url.
The candidate labels can be simple words like in this example, or more descriptive.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre><span class="hljs-meta">>>> </span>predictions = classifier(image, candidate_labels=[<span class="hljs-string">"fox"</span>, <span class="hljs-string">"bear"</span>, <span class="hljs-string">"seagull"</span>, <span class="hljs-string">"owl"</span>])
<span class="hljs-meta">>>> </span>predictions
[{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.9996670484542847</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'owl'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.000199399160919711</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'seagull'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">7.392891711788252e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'fox'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">5.96074532950297e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'bear'</span>}]</pre></div> <h2 class="relative group"><a id="zeroshot-image-classification-by-hand" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#zeroshot-image-classification-by-hand"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Zero-shot image classification by hand</span></h2> <p>Now that you’ve seen how to use the zero-shot image classification pipeline, let’s take a look how you can run zero-shot
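As an illustration of the “more descriptive” option mentioned above (this example is not part of the original guide, and the scores it produces depend on the checkpoint and the exact wording, so the output is omitted), the candidate labels can be phrased as short caption-like prompts:

```python
>>> descriptive_labels = [
...     "a photo of an owl perched on a branch",
...     "a photo of a seagull flying over the sea",
...     "a photo of a fox in the forest",
...     "a photo of a bear in the woods",
... ]
>>> predictions = classifier(image, candidate_labels=descriptive_labels)
```

Caption-style prompts such as “a photo of a …” tend to resemble the text that CLIP-like models saw during pretraining, which can make the scores more reliable for ambiguous images.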
## [](#zeroshot-image-classification-by-hand)Zero-shot image classification by hand

Now that you’ve seen how to use the zero-shot image classification pipeline, let’s take a look at how you can run zero-shot image classification manually.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads). Here we’ll use the same checkpoint as before:

```python
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```

Let’s take a different image to switch things up.

```python
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image
```

![Photo of a car](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg)

Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs.

```python
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```
<span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad():
<span class="hljs-meta">... </span> outputs = model(**inputs)
<span class="hljs-meta">>>> </span>logits = outputs.logits_per_image[<span class="hljs-number">0</span>]
<span class="hljs-meta">>>> </span>probs = logits.softmax(dim=-<span class="hljs-number">1</span>).numpy()
<span class="hljs-meta">>>> </span>scores = probs.tolist()
<span class="hljs-meta">>>> </span>result = [
<span class="hljs-meta">... </span> {<span class="hljs-string">"score"</span>: score, <span class="hljs-string">"label"</span>: candidate_label}
<span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> score, candidate_label <span class="hljs-keyword">in</span> <span class="hljs-built_in">sorted</span>(<span class="hljs-built_in">zip</span>(probs, candidate_labels), key=<span class="hljs-keyword">lambda</span> x: -x[<span class="hljs-number">0</span>])
<span class="hljs-meta">... </span>]
<span class="hljs-meta">>>> </span>result
[{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.998572</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'car'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.0010570387</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'bike'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.0003393686</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'tree'</span>},
{<span class="hljs-string">'score'</span>: <span class="hljs-number">3.1572064e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'cat'</span>}]</pre></div> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div>
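If you plan to classify many images this way, the manual steps can be collected into a small helper. This is only a sketch, not part of the Transformers API: the function name `zero_shot_classify` and its return format (a list of dicts sorted by score, mirroring the pipeline output) are choices made here for illustration, and the output of the final call is omitted since it simply reproduces the result above.

```python
>>> def zero_shot_classify(image, candidate_labels, model, processor):
...     """Score an image against free-form text labels with a CLIP-style zero-shot model."""
...     inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
...     with torch.no_grad():
...         outputs = model(**inputs)
...     # logits_per_image has one row per image; softmax over the candidate labels gives probabilities
...     scores = outputs.logits_per_image[0].softmax(dim=-1).tolist()
...     return sorted(
...         ({"score": score, "label": label} for score, label in zip(scores, candidate_labels)),
...         key=lambda item: -item["score"],
...     )

>>> predictions = zero_shot_classify(image, candidate_labels, model, processor)
```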
<div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a href="/docs/transformers/tasks/zero_shot_object_detection" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Zero-shot object detection</a>
<a href="/docs/transformers/tasks/monocular_depth_estimation" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Depth estimation<span class="ml-2 translate-y-px">→</span></a></div></div></div>
<div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{"chapter":{"title":"Zero-shot image classification","isExpanded":true,"id":"zeroshot-image-classification","url":"#zeroshot-image-classification","sections":[{"title":"Zero-shot image classification pipeline","isExpanded":true,"id":"zeroshot-image-classification-pipeline","url":"#zeroshot-image-classification-pipeline"},{"title":"Zero-shot image classification by hand","isExpanded":true,"id":"zeroshot-image-classification-by-hand","url":"#zeroshot-image-classification-by-hand"}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#zeroshot-image-classification" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification"><wbr>Zero-shot image classification</a> <a href="#zeroshot-image-classification-pipeline" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification-pipeline"><wbr>Zero-shot image classification pipeline</a> <a href="#zeroshot-image-classification-by-hand" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification-by-hand"><wbr>Zero-shot image classification by hand</a> </nav></div></div></div>
<div id="doc-footer"></div></main>
</div>
<script>
import("/front/build/kube-c0d76de/index.js");
window.moonSha = "kube-c0d76de/";
window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc"}`);
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
<!-- Google analytics v4 -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL";
script.async = true;
document.head.appendChild(script);
window.dataLayer = window.dataLayer || [];
function gtag() {
if (window.dataLayer !== undefined) {
window.dataLayer.push(arguments);
}
}
gtag("js", new Date());
gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/tasks/zero_shot_image_classification" });
/// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages
gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" });
/// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent
/// TODO: ask the user for their consent and update this with gtag('consent', 'update')
}
</script>
<!-- Google Analytics v3 (deprecated) -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
(function (i, s, o, g, r, a, m) {
i["GoogleAnalyticsObject"] = r;
(i[r] =
i[r] ||
function () {
(i[r].q = i[r].q || []).push(arguments);
}),
(i[r].l = 1 * new Date());
(a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]);
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m);
})(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics");
ganalytics("create", "UA-83738774-2", "auto");
ganalytics("send", "pageview", "/docs/transformers/tasks/zero_shot_image_classification");
}
</script>
<iframe name="__privateStripeMetricsController1710" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-93afeeb17bc37e711759584dbfc50d47.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Ftasks%2Fzero_shot_image_classification&title=Zero-shot%20image%20classification&referrer=&muid=4037bbd9-2e77-48ed-83ae-d777504798277a13b8&sid=bd1deff1-2829-44a8-8ce0-c570752fefe9dc49b0&version=6&preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html> | 2023-06-27T19:55:03.696Z |