RoCBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoCBertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roc_bert.md
https://huggingface.co/docs/transformers/en/model_doc/roc_bert/#rocbertforquestionanswering
#rocbertforquestionanswering
.md
281_12
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/
.md
282_0
The SwiftFormer model was proposed in [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2. The abstract from the paper is the following: *Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.* This model was contributed by [shehan97](https://huggingface.co/shehan97). The TensorFlow version was contributed by [joaocmd](https://huggingface.co/joaocmd). The original code can be found [here](https://github.com/Amshaker/SwiftFormer).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#overview
#overview
.md
282_1
This is the configuration class to store the configuration of a [`SwiftFormerModel`]. It is used to instantiate a SwiftFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SwiftFormer [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. num_channels (`int`, *optional*, defaults to 3): The number of input channels. depths (`List[int]`, *optional*, defaults to `[3, 3, 6, 4]`): Depth of each stage. embed_dims (`List[int]`, *optional*, defaults to `[48, 56, 112, 220]`): The embedding dimension at each stage. mlp_ratio (`int`, *optional*, defaults to 4): Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input. downsamples (`List[bool]`, *optional*, defaults to `[True, True, True, True]`): Whether or not to downsample inputs between two stages. hidden_act (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (string). `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. down_patch_size (`int`, *optional*, defaults to 3): The size of patches in downsampling layers. down_stride (`int`, *optional*, defaults to 2): The stride of convolution kernels in downsampling layers. down_pad (`int`, *optional*, defaults to 1): Padding in downsampling layers. drop_path_rate (`float`, *optional*, defaults to 0.0): Rate at which to increase dropout probability in DropPath. drop_mlp_rate (`float`, *optional*, defaults to 0.0): Dropout rate for the MLP component of SwiftFormer. drop_conv_encoder_rate (`float`, *optional*, defaults to 0.0): Dropout rate for the ConvEncoder component of SwiftFormer. use_layer_scale (`bool`, *optional*, defaults to `True`): Whether to scale outputs from token mixers. layer_scale_init_value (`float`, *optional*, defaults to 1e-05): Factor by which outputs from token mixers are scaled. batch_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the batch normalization layers. Example: ```python >>> from transformers import SwiftFormerConfig, SwiftFormerModel >>> # Initializing a SwiftFormer swiftformer-xs style configuration >>> configuration = SwiftFormerConfig() >>> # Initializing a model (with random weights) from the swiftformer-xs style configuration >>> model = SwiftFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerconfig
#swiftformerconfig
.md
282_2
The bare SwiftFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformermodel
#swiftformermodel
.md
282_3
SwiftFormer Model transformer with an image classification head on top (e.g. for ImageNet). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`SwiftFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
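The snippet below is a minimal inference sketch for this head. It assumes the [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) checkpoint ships an image processor configuration that [`AutoImageProcessor`] can load, and uses a standard COCO sample image purely for illustration:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, SwiftFormerForImageClassification

>>> # load an example image (any RGB image works)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")
>>> model = SwiftFormerForImageClassification.from_pretrained("MBZUAI/swiftformer-xs")

>>> # preprocess the image and classify it
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
```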
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#swiftformerforimageclassification
#swiftformerforimageclassification
.md
282_4
No docstring available for TFSwiftFormerModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#tfswiftformermodel
#tfswiftformermodel
.md
282_5
No docstring available for TFSwiftFormerForImageClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swiftformer.md
https://huggingface.co/docs/transformers/en/model_doc/swiftformer/#tfswiftformerforimageclassification
#tfswiftformerforimageclassification
.md
282_6
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/
.md
283_0
The SeamlessM4T-v2 model was proposed in [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team from Meta AI. SeamlessM4T-v2 is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. It is an improvement on the [previous version](https://huggingface.co/docs/transformers/main/model_doc/seamless_m4t). For more details on the differences between v1 and v2, refer to section [Difference with SeamlessM4T-v1](#difference-with-seamlessm4t-v1). SeamlessM4T-v2 enables multiple tasks without relying on separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) [`SeamlessM4Tv2Model`] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following: *Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. 
Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.*
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#overview
#overview
.md
283_1
In the following example, we'll load an Arabic audio sample and an English text sample and convert them into Russian speech and French text. First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4Tv2Model >>> processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large") >>> model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large") ``` You can seamlessly use this model on text or on audio, to generate either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"] >>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English text as well >>> text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#usage
#usage
.md
283_2
[`SeamlessM4Tv2Model`] can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With essentially the same code, we have translated both the English text and the Arabic speech into Russian speech samples.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#speech
#speech
.md
283_3
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4Tv2Model.generate`]. This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) >>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#text
#text
.md
283_4
[`SeamlessM4Tv2Model`] is the top-level Transformers model for generating speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same: ```python >>> from transformers import SeamlessM4Tv2ForSpeechToSpeech >>> model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large") ``` Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove `generate_speech=False`. ```python >>> from transformers import SeamlessM4Tv2ForTextToText >>> model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large") ``` Feel free to try out [`SeamlessM4Tv2ForSpeechToText`] and [`SeamlessM4Tv2ForTextToSpeech`] as well.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#1-use-dedicated-models
#1-use-dedicated-models
.md
283_5
You can change the speaker used for speech synthesis with the `speaker_id` argument. Some `speaker_id` values work better than others for certain languages!
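For example, continuing from the speech-generation snippet above, passing a different `speaker_id` changes the synthesized voice (the value `4` here is arbitrary; valid ids depend on the checkpoint's vocoder):

```python
>>> # same call as before, but with an explicit speaker identity
>>> audio_array = model.generate(**text_inputs, tgt_lang="rus", speaker_id=4)[0].cpu().numpy().squeeze()
```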
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#2-change-the-speaker-identity
#2-change-the-speaker-identity
.md
283_6
You can use different [generation strategies](../generation_strategies) for text generation, e.g. `.generate(input_ids=input_ids, text_num_beams=4, text_do_sample=True)`, which will perform multinomial beam-search decoding on the text model. Note that speech generation only supports greedy decoding (the default) or multinomial sampling, which can be used with e.g. `.generate(..., speech_do_sample=True, speech_temperature=0.6)`.
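Putting both together, the sketch below (reusing `model` and `text_inputs` from the usage section) applies beam search to the first-pass text decoder and multinomial sampling to the speech generation; the specific values are arbitrary:

```python
>>> outputs = model.generate(
...     **text_inputs,
...     tgt_lang="rus",
...     text_num_beams=4,        # beam search for the first-pass text decoder
...     speech_do_sample=True,   # multinomial sampling for speech generation
...     speech_temperature=0.6,
... )
```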
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#3-change-the-generation-strategy
#3-change-the-generation-strategy
.md
283_7
Use `return_intermediate_token_ids=True` with [`SeamlessM4Tv2Model`] to return both speech and text!
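As a sketch, assuming the returned generation output exposes `waveform` and `sequences` fields (check the object you actually get back against the model reference below):

```python
>>> outputs = model.generate(**text_inputs, tgt_lang="rus", return_intermediate_token_ids=True)
>>> # synthesized speech
>>> audio_array = outputs.waveform[0].cpu().numpy().squeeze()
>>> # intermediate translated text
>>> translated_text = processor.decode(outputs.sequences[0].tolist(), skip_special_tokens=True)
```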
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#4-generate-speech-and-text-at-the-same-time
#4-generate-speech-and-text-at-the-same-time
.md
283_8
SeamlessM4T-v2 features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text. Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the [HiFi-GAN](https://arxiv.org/abs/2010.05646) architecture is placed on top of the second seq2seq model.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#model-architecture
#model-architecture
.md
283_9
The architecture of this new version differs from the first in a few aspects:
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#difference-with-seamlessm4t-v1
#difference-with-seamlessm4t-v1
.md
283_10
The second seq2seq model, named the text-to-unit model, is now non-autoregressive, meaning that it computes units in a **single forward pass**. This is made possible by: - the use of **character-level embeddings**, meaning that each character of the predicted translated text has its own embedding, which is then used to predict the unit tokens. - the use of an intermediate duration predictor that predicts speech duration at the **character level** on the predicted translated text. - the use of a new text-to-unit decoder mixing convolutions and self-attention to handle longer context.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#improvements-on-the-second-pass-model
#improvements-on-the-second-pass-model
.md
283_11
The speech encoder, which is used during the first-pass generation process to predict the translated text, differs mainly from the previous speech encoder through these mechanisms: - the use of a chunked attention mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk and a fixed number of previous chunks. - the use of relative position embeddings, which only consider the distance between sequence elements rather than absolute positions. Please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155) for more details. - the use of a causal depthwise convolution instead of a non-causal one.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#difference-in-the-speech-encoder
#difference-in-the-speech-encoder
.md
283_12
Here's how the generation process works: - Input text or speech is processed through its specific encoder. - A decoder creates text tokens in the desired language. - If speech generation is required, the second seq2seq model generates unit tokens in a non-autoregressive way. - These unit tokens are then passed through the final vocoder to produce the actual speech. This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#generation-process
#generation-process
.md
283_13
The original SeamlessM4Tv2 Model transformer which can be used for all the available tasks (S2ST, S2TT, T2TT, T2ST). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. current_modality (`str`, *optional*, defaults to `"text"`): Default modality. Used only to initialize the model. It can be set to `"text"` or `"speech"`. This will be updated automatically according to the modality passed to the forward and generate passes (`input_ids` for text and `input_features` for audio). Methods: generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2model
#seamlessm4tv2model
.md
283_14
The text-to-speech SeamlessM4Tv2 Model transformer which can be used for T2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttospeech
#seamlessm4tv2fortexttospeech
.md
283_15
The speech-to-speech SeamlessM4Tv2 Model transformer which can be used for S2ST. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtospeech
#seamlessm4tv2forspeechtospeech
.md
283_16
The text-to-text SeamlessM4Tv2 Model transformer which can be used for T2TT. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2fortexttotext
#seamlessm4tv2fortexttotext
.md
283_17
The speech-to-text SeamlessM4Tv2 Model transformer which can be used for S2TT. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`~SeamlessM4Tv2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2forspeechtotext
#seamlessm4tv2forspeechtotext
.md
283_18
This is the configuration class to store the configuration of a [`~SeamlessM4Tv2Model`]. It is used to instantiate a SeamlessM4Tv2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SeamlessM4Tv2 [facebook/seamless-m4t-v2-large](https://huggingface.co/facebook/seamless-m4t-v2-large) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 256102): Vocabulary size of the text modality of the SeamlessM4Tv2 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`~SeamlessM4Tv2Model`], [`~SeamlessM4Tv2ForTextToSpeech`] or [`~SeamlessM4Tv2ForTextToText`]. t2u_vocab_size (`int`, *optional*, defaults to 10082): Unit vocabulary size of the SeamlessM4Tv2 model. Defines the number of different "unit tokens" that can be represented by the `inputs_ids` passed when calling the Text-To-Units sub-model of [`~SeamlessM4Tv2Model`], [`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`]. char_vocab_size (`int`, *optional*, defaults to 10943): Character vocabulary size of the SeamlessM4Tv2 model. Defines the number of different character tokens that can be represented by the `char_inputs_ids` passed when calling the Text-To-Units sub-model of [`~SeamlessM4Tv2Model`], [`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`]. > Parameters shared across sub-models hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the "intermediate" layers in the architecture. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). max_position_embeddings (`int`, *optional*, defaults to 4096): The maximum sequence length that the text encoder and decoder of this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether the model is used as an encoder/decoder or not. encoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the encoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.05): The LayerDrop probability for the decoders. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. activation_function (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the decoder and feed-forward layers. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, decoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all attention layers. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all activation layers in the model. 
scale_embedding (`bool`, *optional*, defaults to `True`): Scale embeddings by diving by sqrt(d_model). > Text encoder and text decoder specific parameters encoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text encoder. encoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text encoder. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text encoder. decoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer text decoder. decoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text decoder. decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text decoder. decoder_start_token_id (`int`, *optional*, defaults to 3): If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. Only applied in the text decoder. max_new_tokens (`int`, *optional*, defaults to 256): The maximum numbers of text tokens to generate, ignoring the number of tokens in the prompt. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ text token. Only applied to the text-decoder model. bos_token_id (`int`, *optional*, defaults to 2): The id of the _beginning-of-stream_ text token. Only applied to the text-decoder model. eos_token_id (`int`, *optional*, defaults to 3): The id of the _end-of-stream_ text token. Only applied to the text-decoder model. > Speech encoder specific parameters speech_encoder_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer speech encoder. speech_encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer speech encoder. speech_encoder_intermediate_size (`int`, *optional*, defaults to 4096): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer speech encoder. speech_encoder_hidden_act (`str` or `function`, *optional*, defaults to `"swish"`): The non-linear activation function (function or string) in the speech encoder. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. speech_encoder_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all layers in the speech encoder. add_adapter (`bool`, *optional*, defaults to `True`): Add an adapter layer on top of the speech encoder. speech_encoder_layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability for the speech encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. feature_projection_input_dim (`int`, *optional*, defaults to 160): Input dimension of the input feature projection of the speech encoder, i.e the dimension after processing input audios with [`SeamlessM4TFeatureExtractor`]. adaptor_kernel_size (`int`, *optional*, defaults to 8): Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. adaptor_stride (`int`, *optional*, defaults to 8): Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. adaptor_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all layers in the speech adapter. 
num_adapter_layers (`int`, *optional*, defaults to 1): Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is True`. position_embeddings_type (`str`, *optional*, defaults to `"relative_key"`): Can be specified to `relative_key`. If left to `None`, no relative position embedding is applied. Only applied to the speech encoder. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). conv_depthwise_kernel_size (`int`, *optional*, defaults to 31): Kernel size of convolutional depthwise 1D layer in Conformer blocks. Only applied to the speech encoder. left_max_position_embeddings (`int`, *optional*, defaults to 64): The left clipping value for relative positions. right_max_position_embeddings (`int`, *optional*, defaults to 8): The right clipping value for relative positions. speech_encoder_chunk_size (`int`, *optional*, defaults to 20000): The size of each attention chunk. speech_encoder_left_chunk_num (`int`, *optional*, defaults to 128): Number of chunks on the left up to which lookahead is allowed. > Text-To-Unit (t2u) model specific parameters t2u_bos_token_id (`int`, *optional*, defaults to 0): The id of the _beginning-of-stream_ unit token. Only applied to the text-to-unit seq2seq model. t2u_pad_token_id (`int`, *optional*, defaults to 1): The id of the _padding_ unit token. Only applied to the text-to-unit seq2seq model. t2u_eos_token_id (`int`, *optional*, defaults to 2): The id of the _end-of-stream_ unit token. Only applied to the text-to-unit seq2seq model. t2u_encoder_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer text-to-unit encoder. t2u_encoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit encoder. t2u_encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text-to-unit encoder. t2u_decoder_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer text-to-unit decoder. t2u_decoder_ffn_dim (`int`, *optional*, defaults to 8192): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer text-to-unit decoder. t2u_decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer text-to-unit decoder. t2u_max_position_embeddings (`int`, *optional*, defaults to 4096): The maximum sequence length that this model text-to-unit component might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). t2u_variance_predictor_embed_dim (`int`, *optional*, defaults to 1024): The projection dimension of the text-to-unit's duration predictor. t2u_variance_predictor_hidden_dim (`int`, *optional*, defaults to 256): Internal dimension of the text-to-unit's duration predictor. t2u_variance_predictor_kernel_size (`int`, *optional*, defaults to 3): Kernel size of the convolutional layers of the text-to-unit's duration predictor. t2u_variance_pred_dropout (`float`, *optional*, defaults to 0.5): The dropout probability of the text-to-unit's duration predictor. > Hifi-Gan Vocoder specific parameters sampling_rate (`int`, *optional*, defaults to 16000): The sampling rate at which the output audio will be generated, expressed in hertz (Hz). 
upsample_initial_channel (`int`, *optional*, defaults to 512): The number of input channels into the hifi-gan upsampling network. Applies to the vocoder only. upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[5, 4, 4, 2, 2]`): A tuple of integers defining the stride of each 1D convolutional layer in the vocoder upsampling network. The length of *upsample_rates* defines the number of convolutional layers and has to match the length of *upsample_kernel_sizes*. Applies to the vocoder only. upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[11, 8, 8, 4, 4]`): A tuple of integers defining the kernel size of each 1D convolutional layer in the vocoder upsampling network. The length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match the length of *upsample_rates*. Applies to the vocoder only. resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`): A tuple of integers defining the kernel sizes of the vocoder 1D convolutional layers in the multi-receptive field fusion (MRF) module. Applies to the vocoder only. resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`): A nested tuple of integers defining the dilation rates of the vocoder dilated 1D convolutional layers in the multi-receptive field fusion (MRF) module. Applies to the vocoder only. leaky_relu_slope (`float`, *optional*, defaults to 0.1): The angle of the negative slope used by the leaky ReLU activation in the vocoder. Applies to the vocoder only. unit_hifi_gan_vocab_size (`int`, *optional*, defaults to 10000): Vocabulary size of the SeamlessM4Tv2 vocoder. Defines the number of different unit tokens that can be represented by the `inputs_ids` passed when calling the vocoder of [`~SeamlessM4Tv2Model`], [`~SeamlessM4Tv2ForSpeechToSpeech`] or [`~SeamlessM4Tv2ForTextToSpeech`]. unit_embed_dim (`int`, *optional*, defaults to 1280): The projection dimension of the input ids given to the hifi-gan vocoder. Applies to the vocoder only. lang_embed_dim (`int`, *optional*, defaults to 256): The projection dimension of the target language given to the hifi-gan vocoder. Applies to the vocoder only. spkr_embed_dim (`int`, *optional*, defaults to 256): The projection dimension of the speaker id given to the hifi-gan vocoder. Applies to the vocoder only. vocoder_num_langs (`int`, *optional*, defaults to 36): Number of langs supported by the vocoder. Might be different from `t2u_num_langs`. vocoder_num_spkrs (`int`, *optional*, defaults to 200): Number of speakers supported by the vocoder. variance_predictor_kernel_size (`int`, *optional*, defaults to 3): Kernel size of the duration predictor. Applies to the vocoder only. var_pred_dropout (`float`, *optional*, defaults to 0.5): The dropout probability of the duration predictor. Applies to the vocoder only. vocoder_offset (`int`, *optional*, defaults to 4): Offset the unit token ids by this number to account for symbol tokens. Applies to the vocoder only. ```python >>> from transformers import SeamlessM4Tv2Model, SeamlessM4Tv2Config >>> # Initializing a SeamlessM4Tv2 "" style configuration >>> configuration = SeamlessM4Tv2Config() >>> # Initializing a model from the "" style configuration >>> model = SeamlessM4Tv2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/seamless_m4t_v2.md
https://huggingface.co/docs/transformers/en/model_doc/seamless_m4t_v2/#seamlessm4tv2config
#seamlessm4tv2config
.md
283_19
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/
.md
284_0
The ViTMSN model was proposed in [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot regimes. The abstract from the paper is the following: *We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark.* <img src="https://i.ibb.co/W6PQMdC/Screenshot-2022-09-13-at-9-08-40-AM.png" alt="drawing" width="600"/> <small> MSN architecture. Taken from the <a href="https://arxiv.org/abs/2204.07141">original paper.</a> </small> This model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code can be found [here](https://github.com/facebookresearch/msn).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#overview
#overview
.md
284_1
- MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images. - The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset, use the [`ViTMSNForImageClassification`] class which is initialized from [`ViTMSNModel`]. Follow [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) for a detailed tutorial on fine-tuning. - MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K labels when fine-tuned.
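As a minimal sketch of that workflow (the label count here is illustrative; the classification head is randomly initialized and needs fine-tuning on your own dataset):

```python
>>> import torch
>>> from transformers import ViTMSNForImageClassification

>>> # loads the self-supervised MSN backbone; the classification head is randomly initialized
>>> model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-base", num_labels=10)

>>> # dummy batch just to check shapes; in practice, preprocess real images with AutoImageProcessor
>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> logits = model(pixel_values=pixel_values).logits
>>> print(logits.shape)  # torch.Size([1, 10])
```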
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#usage-tips
#usage-tips
.md
284_2
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ```python import torch from transformers import ViTMSNForImageClassification model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-base", attn_implementation="sdpa", torch_dtype=torch.float16) ... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and the `facebook/vit-msn-base` model, we saw the following speedups during inference. | Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa mode | Speed up, SDPA / eager (x) | |--------------|-------------------------------------------|-------------------------------------------|------------------------------| | 1 | 7 | 6 | 1.17 | | 2 | 8 | 6 | 1.33 | | 4 | 8 | 6 | 1.33 | | 8 | 8 | 6 | 1.33 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#using-scaled-dot-product-attention-sdpa
#using-scaled-dot-product-attention-sdpa
.md
284_3
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN. <PipelineTag pipeline="image-classification"/> - [`ViTMSNForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#resources
#resources
.md
284_4
This is the configuration class to store the configuration of a [`ViTMSNModel`]. It is used to instantiate a ViT MSN model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViT [facebook/vit-msn-base](https://huggingface.co/facebook/vit-msn-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. Example: ```python >>> from transformers import ViTMSNModel, ViTMSNConfig >>> # Initializing a ViT MSN vit-msn-base style configuration >>> configuration = ViTMSNConfig() >>> # Initializing a model from the vit-msn-base style configuration >>> model = ViTMSNModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnconfig
#vitmsnconfig
.md
284_5
The bare ViTMSN Model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTMSNConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnmodel
#vitmsnmodel
.md
284_6
ViTMSN Model with an image classification head on top (e.g. for ImageNet). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTMSNConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit_msn.md
https://huggingface.co/docs/transformers/en/model_doc/vit_msn/#vitmsnforimageclassification
#vitmsnforimageclassification
.md
284_7
<!-- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
https://huggingface.co/docs/transformers/en/model_doc/olmoe/
.md
285_0
The OLMoE model was proposed in [OLMoE: Open Mixture-of-Experts Language Models](https://arxiv.org/abs/2409.02060) by Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi. OLMoE is a series of **O**pen **L**anguage **Mo**dels using sparse **M**ixture-**o**f-**E**xperts designed to enable the science of language models. We release all code, checkpoints, logs, and details involved in training these models. The abstract from the paper is the following: *We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.* This model was contributed by [Muennighoff](https://hf.co/Muennighoff). The original code can be found [here](https://github.com/allenai/OLMoE).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#overview
#overview
.md
285_1
This is the configuration class to store the configuration of a [`OlmoeModel`]. It is used to instantiate an OLMoE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [allenai/OLMoE-1B-7B-0924](https://huggingface.co/allenai/OLMoE-1B-7B-0924). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50304): Vocabulary size of the OLMoE model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`OlmoeModel`] hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 2048): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 16): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details checkout [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 4096): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 50279): End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update `max_position_embeddings` to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. 
This is an experimental feature, subject to breaking API changes in future versions. attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. clip_qkv (`float`, *optional*): If not `None`, elements of query, key and value attention states are clipped so that their absolute value does not exceed this value. num_experts_per_tok (`int`, *optional*, defaults to 8): Number of selected experts. num_experts (`int`, *optional*, defaults to 64): Number of routed experts. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss, including load balancing loss and router z-loss. router_aux_loss_coef (`float`, *optional*, defaults to 0.01): The aux loss factor for the total loss. norm_topk_prob (`bool`, *optional*, defaults to `False`): Whether to normalize the topk probabilities. ```python >>> from transformers import OlmoeModel, OlmoeConfig >>> # Initializing a OLMoE 7B A1B style configuration >>> configuration = OlmoeConfig() >>> # Initializing a model from the OLMoE 7B A1B style configuration >>> model = OlmoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeconfig
#olmoeconfig
.md
285_2
The bare Olmoe Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`OlmoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`OlmoeDecoderLayer`] Args: config: OlmoeConfig Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoemodel
#olmoemodel
.md
285_3
No docstring available for OlmoeForCausalLM Methods: forward
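No class docstring is rendered here, so as a minimal sketch (reusing the `allenai/OLMoE-1B-7B-0924` checkpoint named in the configuration section above), greedy text generation with the causal LM head could look like this:

```python
>>> from transformers import AutoTokenizer, OlmoeForCausalLM

>>> # checkpoint name taken from the OlmoeConfig docs above
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924")

>>> inputs = tokenizer("The capital of France is", return_tensors="pt")
>>> generated_ids = model.generate(**inputs, max_new_tokens=16)
>>> print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```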
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/olmoe.md
https://huggingface.co/docs/transformers/en/model_doc/olmoe/#olmoeforcausallm
#olmoeforcausallm
.md
285_4
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/
.md
286_0
The OneFormer model was proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_teaser.png"/> The abstract from the paper is the following: *Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.* The figure below illustrates the architecture of OneFormer. Taken from the [original paper](https://arxiv.org/abs/2211.06220). <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_architecture.png"/> This model was contributed by [Jitesh Jain](https://huggingface.co/praeclarumjj3). The original code can be found [here](https://github.com/SHI-Labs/OneFormer).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#overview
#overview
.md
286_1
- OneFormer requires two inputs during inference: *image* and *task token*. - During training, OneFormer only uses panoptic annotations. - If you want to train the model in a distributed environment across multiple nodes, then one should update the `get_num_masks` function inside the `OneFormerLoss` class of `modeling_oneformer.py`. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/SHI-Labs/OneFormer/blob/33ebb56ed34f970a30ae103e786c0cb64c653d9a/oneformer/modeling/criterion.py#L287). - One can use [`OneFormerProcessor`] to prepare input images and task inputs for the model, as well as optional targets. [`OneFormerProcessor`] wraps [`OneFormerImageProcessor`] and [`CLIPTokenizer`] into a single instance to both prepare the images and encode the task inputs. - To get the final segmentation, depending on the task, you can call [`~OneFormerProcessor.post_process_semantic_segmentation`] or [`~OneFormerImageProcessor.post_process_instance_segmentation`] or [`~OneFormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`OneFormerForUniversalSegmentation`] output; panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object(s) (e.g. sky) together.
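A minimal inference sketch of these tips, assuming the `shi-labs/oneformer_ade20k_swin_tiny` checkpoint (referenced in the configuration section below) and a COCO sample image used elsewhere in these docs:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

>>> processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
>>> model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # the task input ("semantic", "instance" or "panoptic") conditions the model at inference time
>>> inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
>>> outputs = model(**inputs)

>>> # post-process into a (height, width) map with one predicted class id per pixel
>>> semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```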
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#usage-tips
#usage-tips
.md
286_2
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer. - Demo notebooks regarding inference + fine-tuning on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OneFormer). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#resources
#resources
.md
286_3
models.oneformer.modeling_oneformer.OneFormerModelOutput Class for outputs of [`OneFormerModel`]. This class returns all the needed hidden states to compute the logits. Args: encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder model at the output of each stage. pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage. transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the transformer decoder at the output of each stage. transformer_decoder_object_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`) Output object queries from the last layer in the transformer decoder. transformer_decoder_contrastive_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`) Contrastive queries from the transformer decoder. transformer_decoder_mask_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, height, width)`) Mask Predictions from the last layer in the transformer decoder. transformer_decoder_class_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, num_classes+1)`): Class Predictions from the last layer in the transformer decoder. transformer_decoder_auxiliary_predictions (Tuple of Dict of `str, torch.FloatTensor`, *optional*): Tuple of class and mask predictions from each layer of the transformer decoder. text_queries (`torch.FloatTensor`, *optional* of shape `(batch_size, num_queries, hidden_dim)`) Text queries derived from the input text list used for calculating contrastive loss during training. task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`) 1D task token to condition the queries. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Self and Cross Attentions weights from transformer decoder. models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput Class for outputs of [`OneFormerForUniversalSegmentation`]. This output can be directly passed to [`~OneFormerImageProcessor.post_process_semantic_segmentation`] or [`~OneFormerImageProcessor.post_process_instance_segmentation`] or [`~OneFormerImageProcessor.post_process_panoptic_segmentation`] depending on the task. Please see [`~OneFormerImageProcessor`] for details regarding usage. Args: loss (`torch.Tensor`, *optional*): The computed loss, returned when labels are present.
class_queries_logits (`torch.FloatTensor`): A tensor of shape `(batch_size, num_queries, num_labels + 1)` representing the proposed classes for each query. Note the `+ 1` is needed because we incorporate the null class. masks_queries_logits (`torch.FloatTensor`): A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each query. auxiliary_predictions (List of Dict of `str, torch.FloatTensor`, *optional*): List of class and mask predictions from each layer of the transformer decoder. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder model at the output of each stage. pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage. transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the transformer decoder at the output of each stage. transformer_decoder_object_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`) Output object queries from the last layer in the transformer decoder. transformer_decoder_contrastive_queries (`torch.FloatTensor` of shape `(batch_size, num_queries, hidden_dim)`) Contrastive queries from the transformer decoder. transformer_decoder_mask_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, height, width)`) Mask Predictions from the last layer in the transformer decoder. transformer_decoder_class_predictions (`torch.FloatTensor` of shape `(batch_size, num_queries, num_classes+1)`): Class Predictions from the last layer in the transformer decoder. transformer_decoder_auxiliary_predictions (List of Dict of `str, torch.FloatTensor`, *optional*): List of class and mask predictions from each layer of the transformer decoder. text_queries (`torch.FloatTensor`, *optional* of shape `(batch_size, num_queries, hidden_dim)`) Text queries derived from the input text list used for calculating contrastive loss during training. task_token (`torch.FloatTensor` of shape `(batch_size, hidden_dim)`) 1D task token to condition the queries. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `tuple(torch.FloatTensor)` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Self and Cross Attentions weights from transformer decoder.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformer-specific-outputs
#oneformer-specific-outputs
.md
286_4
This is the configuration class to store the configuration of a [`OneFormerModel`]. It is used to instantiate a OneFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OneFormer [shi-labs/oneformer_ade20k_swin_tiny](https://huggingface.co/shi-labs/oneformer_ade20k_swin_tiny) architecture trained on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`PretrainedConfig`, *optional*, defaults to `SwinConfig`): The configuration of the backbone model. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. ignore_value (`int`, *optional*, defaults to 255): Values to be ignored in GT label while calculating loss. num_queries (`int`, *optional*, defaults to 150): Number of object queries. no_object_weight (`float`, *optional*, defaults to 0.1): Weight for no-object class predictions. class_weight (`float`, *optional*, defaults to 2.0): Weight for Classification CE loss. mask_weight (`float`, *optional*, defaults to 5.0): Weight for binary CE loss. dice_weight (`float`, *optional*, defaults to 5.0): Weight for dice loss. contrastive_weight (`float`, *optional*, defaults to 0.5): Weight for contrastive loss. contrastive_temperature (`float`, *optional*, defaults to 0.07): Initial value for scaling the contrastive logits. train_num_points (`int`, *optional*, defaults to 12544): Number of points to sample while calculating losses on mask predictions. oversample_ratio (`float`, *optional*, defaults to 3.0): Ratio to decide how many points to oversample. importance_sample_ratio (`float`, *optional*, defaults to 0.75): Ratio of points that are sampled via importance sampling. init_std (`float`, *optional*, defaults to 0.02): Standard deviation for normal initialization. init_xavier_std (`float`, *optional*, defaults to 1.0): Standard deviation for xavier uniform initialization. layer_norm_eps (`float`, *optional*, defaults to 1e-05): Epsilon for layer normalization. is_training (`bool`, *optional*, defaults to `False`): Whether to run in training or inference mode. use_auxiliary_loss (`bool`, *optional*, defaults to `True`): Whether to calculate loss using intermediate predictions from transformer decoder. output_auxiliary_logits (`bool`, *optional*, defaults to `True`): Whether to return intermediate predictions from transformer decoder. strides (`list`, *optional*, defaults to `[4, 8, 16, 32]`): List containing the strides for feature maps in the encoder.
task_seq_len (`int`, *optional*, defaults to 77): Sequence length for tokenizing text list input. text_encoder_width (`int`, *optional*, defaults to 256): Hidden size for text encoder. text_encoder_context_length (`int`, *optional*, defaults to 77): Input sequence length for text encoder. text_encoder_num_layers (`int`, *optional*, defaults to 6): Number of layers for transformer in text encoder. text_encoder_vocab_size (`int`, *optional*, defaults to 49408): Vocabulary size for tokenizer. text_encoder_proj_layers (`int`, *optional*, defaults to 2): Number of layers in MLP for projecting text queries. text_encoder_n_ctx (`int`, *optional*, defaults to 16): Number of learnable text context queries. conv_dim (`int`, *optional*, defaults to 256): Feature map dimension to map outputs from the backbone. mask_dim (`int`, *optional*, defaults to 256): Dimension for feature maps in pixel decoder. hidden_dim (`int`, *optional*, defaults to 256): Dimension for hidden states in transformer decoder. encoder_feedforward_dim (`int`, *optional*, defaults to 1024): Dimension for FFN layer in pixel decoder. norm (`str`, *optional*, defaults to `"GN"`): Type of normalization. encoder_layers (`int`, *optional*, defaults to 6): Number of layers in pixel decoder. decoder_layers (`int`, *optional*, defaults to 10): Number of layers in transformer decoder. use_task_norm (`bool`, *optional*, defaults to `True`): Whether to normalize the task token. num_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads in transformer layers in the pixel and transformer decoders. dropout (`float`, *optional*, defaults to 0.1): Dropout probability for pixel and transformer decoders. dim_feedforward (`int`, *optional*, defaults to 2048): Dimension for FFN layer in transformer decoder. pre_norm (`bool`, *optional*, defaults to `False`): Whether to normalize hidden states before attention layers in transformer decoder. enforce_input_proj (`bool`, *optional*, defaults to `False`): Whether to project hidden states in transformer decoder. query_dec_layers (`int`, *optional*, defaults to 2): Number of layers in query transformer. common_stride (`int`, *optional*, defaults to 4): Common stride used for features in pixel decoder. Examples: ```python >>> from transformers import OneFormerConfig, OneFormerModel >>> # Initializing a OneFormer shi-labs/oneformer_ade20k_swin_tiny configuration >>> configuration = OneFormerConfig() >>> # Initializing a model (with random weights) from the shi-labs/oneformer_ade20k_swin_tiny style configuration >>> model = OneFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerconfig
#oneformerconfig
.md
286_5
Constructs a OneFormer image processor. The image processor can be used to prepare image(s), task input(s) and optional text inputs and targets for the model. This image processor inherits from [`BaseImageProcessor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the input to a certain `size`. size (`int`, *optional*, defaults to 800): Resize the input to the given size. Only has an effect if `do_resize` is set to `True`. If size is a sequence like `(width, height)`, output size will be matched to this. If size is an int, smaller edge of the image will be matched to this number. i.e, if `height > width`, then image will be rescaled to `(size * height / width, size)`. resample (`int`, *optional*, defaults to `Resampling.BILINEAR`): An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`, `PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`, `PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set to `True`. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the input to a certain `scale`. rescale_factor (`float`, *optional*, defaults to `1/ 255`): Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to normalize the input with mean and standard deviation. image_mean (`int`, *optional*, defaults to `[0.485, 0.456, 0.406]`): The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean. image_std (`int`, *optional*, defaults to `[0.229, 0.224, 0.225]`): The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the ImageNet std. ignore_index (`int`, *optional*): Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels denoted with 0 (background) will be replaced with `ignore_index`. do_reduce_labels (`bool`, *optional*, defaults to `False`): Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by `ignore_index`. repo_path (`str`, *optional*, defaults to `"shi-labs/oneformer_demo"`): Path to hub repo or local directory containing the JSON file with class information for the dataset. If unset, will look for `class_info_file` in the current working directory. class_info_file (`str`, *optional*): JSON file containing class information for the dataset. See `shi-labs/oneformer_demo/cityscapes_panoptic.json` for an example. num_text (`int`, *optional*): Number of text entries in the text input list. num_labels (`int`, *optional*): The number of labels in the segmentation map. Methods: preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerimageprocessor
#oneformerimageprocessor
.md
286_6
Constructs an OneFormer processor which wraps [`OneFormerImageProcessor`] and [`CLIPTokenizer`]/[`CLIPTokenizerFast`] into a single processor that inherits both the image processor and tokenizer functionalities. Args: image_processor ([`OneFormerImageProcessor`]): The image processor is a required input. tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`]): The tokenizer is a required input. max_seq_len (`int`, *optional*, defaults to 77): Sequence length for input text list. task_seq_len (`int`, *optional*, defaults to 77): Sequence length for input task token.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerprocessor
#oneformerprocessor
.md
286_7
The bare OneFormer Model outputting raw hidden-states without any specific head on top. This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`OneFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformermodel
#oneformermodel
.md
286_8
OneFormer Model for instance, semantic and panoptic image segmentation. This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`OneFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/oneformer.md
https://huggingface.co/docs/transformers/en/model_doc/oneformer/#oneformerforuniversalsegmentation
#oneformerforuniversalsegmentation
.md
286_9
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/
.md
287_0
SEW (Squeezed and Efficient Wav2Vec) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: *This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.* This model was contributed by [anton-l](https://huggingface.co/anton-l).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#overview
#overview
.md
287_1
- SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`].
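A minimal sketch of that CTC decoding flow. The fine-tuned checkpoint name (`asapp/sew-tiny-100k-ft-ls100h`) and the dummy LibriSpeech split are assumptions here and are not referenced elsewhere on this page:

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoProcessor, SEWForCTC

>>> # assumed CTC fine-tuned checkpoint; the base asapp/sew-tiny-100k model has no CTC head
>>> processor = AutoProcessor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
>>> model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = logits.argmax(dim=-1)
>>> # the tokenizer wrapped by the processor is a Wav2Vec2CTCTokenizer, as noted above
>>> transcription = processor.batch_decode(predicted_ids)
```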
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#usage-tips
#usage-tips
.md
287_2
- [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#resources
#resources
.md
287_3
This is the configuration class to store the configuration of a [`SEWModel`]. It is used to instantiate a SEW model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW [asapp/sew-tiny-100k](https://huggingface.co/asapp/sew-tiny-100k) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32): Vocabulary size of the SEW model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`SEWModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. squeeze_factor (`int`, *optional*, defaults to 2): Sequence length downsampling factor after the encoder and upsampling factor after the transformer. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`SEWForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`): The norm to be applied to 1D convolutional layers in feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for output of the feature encoder. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)`): A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)`): A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_bias (`bool`, *optional*, defaults to `False`): Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128): Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16): Number of groups of 1D convolutional positional embeddings layer. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates ''mask_time_prob*len(time_axis)/mask_time_length'' independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_feature_length` generated along the time axis, each time step, irrespective of `mask_feature_prob`. Only relevant if ''mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks'' mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates ''mask_feature_prob*len(feature_axis)/mask_time_length'' independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if ''mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks'' ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`SEWForCTC`].
ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`SEWForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`SEWForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 256): Dimensionality of the projection before token mean-pooling for classification. Example: ```python >>> from transformers import SEWConfig, SEWModel >>> # Initializing a SEW asapp/sew-tiny-100k style configuration >>> configuration = SEWConfig() >>> # Initializing a model (with random weights) from the asapp/sew-tiny-100k style configuration >>> model = SEWModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewconfig
#sewconfig
.md
287_4
The bare SEW Model transformer outputting raw hidden-states without any specific head on top. SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SEWConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewmodel
#sewmodel
.md
287_5
SEW Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SEWConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforctc
#sewforctc
.md
287_6
SEW Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. SEW was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SEWConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
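A hedged sketch of the classification head API. Loading the base `asapp/sew-tiny-100k` checkpoint attaches a randomly initialized classifier, so this only illustrates the call pattern; fine-tune before trusting the predictions:

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoFeatureExtractor, SEWForSequenceClassification

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-tiny-100k")
>>> # the classification head on top of the base checkpoint is randomly initialized
>>> model = SEWForSequenceClassification.from_pretrained("asapp/sew-tiny-100k", num_labels=12)

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(dim=-1).item()
```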
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/sew.md
https://huggingface.co/docs/transformers/en/model_doc/sew/#sewforsequenceclassification
#sewforsequenceclassification
.md
287_7
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/
.md
288_0
The AltCLIP model was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP's capabilities such as multilingual understanding. The abstract from the paper is the following: *In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.* This model was contributed by [jongjyh](https://huggingface.co/jongjyh).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#overview
#overview
.md
288_1
The usage of AltCLIP is very similar to that of CLIP; the difference lies in the text encoder. Note that AltCLIP uses bidirectional attention instead of causal attention and takes the [CLS] token in XLM-R to represent the text embedding. AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. AltCLIP uses a ViT-like transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as a representation of the entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model. The [`AltCLIPProcessor`] wraps a [`CLIPImageProcessor`] and a [`XLMRobertaTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [`AltCLIPProcessor`] and [`AltCLIPModel`]. ```python >>> from PIL import Image >>> import requests >>> from transformers import AltCLIPModel, AltCLIPProcessor >>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP") >>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` <Tip> This model is based on `CLIPModel`; use it like you would use the original [CLIP](clip). </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#usage-tips-and-example
#usage-tips-and-example
.md
288_2
This is the configuration class to store the configuration of a [`AltCLIPModel`]. It is used to instantiate an AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`AltCLIPTextConfig`]. vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`AltCLIPVisionConfig`]. projection_dim (`int`, *optional*, defaults to 768): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import AltCLIPConfig, AltCLIPModel >>> # Initializing a AltCLIPConfig with BAAI/AltCLIP style configuration >>> configuration = AltCLIPConfig() >>> # Initializing a AltCLIPModel (with random weights) from the BAAI/AltCLIP style configuration >>> model = AltCLIPModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize a AltCLIPConfig from a AltCLIPTextConfig and a AltCLIPVisionConfig >>> # Initializing a AltCLIPText and AltCLIPVision configuration >>> config_text = AltCLIPTextConfig() >>> config_vision = AltCLIPVisionConfig() >>> config = AltCLIPConfig.from_text_vision_configs(config_text, config_vision) ``` Methods: from_text_vision_configs
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipconfig
#altclipconfig
.md
288_3
This is the configuration class to store the configuration of a [`AltCLIPTextModel`]. It is used to instantiate a AltCLIP text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 250002): Vocabulary size of the AltCLIP model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`AltCLIPTextModel`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 514): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 1): The vocabulary size of the `token_type_ids` passed when calling [`AltCLIPTextModel`] initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 0.02): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 1): The id of the *padding* token. bos_token_id (`int`, *optional*, defaults to 0): The id of the *beginning-of-sequence* token. eos_token_id (`Union[int, List[int]]`, *optional*, defaults to 2): The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). 
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. project_dim (`int`, *optional*, defaults to 768): The dimensions of the teacher model before the mapping layer. Examples: ```python >>> from transformers import AltCLIPTextModel, AltCLIPTextConfig >>> # Initializing a AltCLIPTextConfig with BAAI/AltCLIP style configuration >>> configuration = AltCLIPTextConfig() >>> # Initializing a AltCLIPTextModel (with random weights) from the BAAI/AltCLIP style configuration >>> model = AltCLIPTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextconfig
#altcliptextconfig
.md
288_4
This is the configuration class to store the configuration of a [`AltCLIPVisionModel`]. It is used to instantiate an AltCLIP vision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): The number of input channels. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). Example: ```python >>> from transformers import AltCLIPVisionConfig, AltCLIPVisionModel >>> # Initializing a AltCLIPVisionConfig with BAAI/AltCLIP style configuration >>> configuration = AltCLIPVisionConfig() >>> # Initializing a AltCLIPVisionModel (with random weights) from the BAAI/AltCLIP style configuration >>> model = AltCLIPVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionconfig
#altclipvisionconfig
.md
288_5
Constructs an AltCLIP processor which wraps a CLIP image processor and an XLM-Roberta tokenizer into a single processor. [`AltCLIPProcessor`] offers all the functionalities of [`CLIPImageProcessor`] and [`XLMRobertaTokenizerFast`]. See the [`~AltCLIPProcessor.__call__`] and [`~AltCLIPProcessor.decode`] for more information. Args: image_processor ([`CLIPImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`XLMRobertaTokenizerFast`], *optional*): The tokenizer is a required input.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipprocessor
#altclipprocessor
.md
288_6
No docstring available for AltCLIPModel Methods: forward - get_text_features - get_image_features
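Since no docstring example surfaces here, below is a rough usage sketch (our own, assuming the `BAAI/AltCLIP` checkpoint referenced above is available) showing how [`AltCLIPModel`] and [`AltCLIPProcessor`] can be combined to score image-text similarity:

```python
import requests
import torch
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

# checkpoint referenced in the configuration docs above
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# CLIP-style contrastive logits: one row per image, one column per text prompt
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```

The `get_text_features` and `get_image_features` methods listed above can be used instead when only one modality is needed, e.g. for building an embedding index.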
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipmodel
#altclipmodel
.md
288_7
No docstring available for AltCLIPTextModel Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altcliptextmodel
#altcliptextmodel
.md
288_8
No docstring available for AltCLIPVisionModel Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/altclip.md
https://huggingface.co/docs/transformers/en/model_doc/altclip/#altclipvisionmodel
#altclipvisionmodel
.md
288_9
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/
.md
289_0
The PaliGemma model was proposed in [PaliGemma – Google's Cutting-Edge Open Vision Language Model](https://huggingface.co/blog/paligemma) by Google. It is a 3B vision-language model composed of a [SigLIP](siglip) vision encoder and a [Gemma](gemma) language decoder, linked by a multimodal linear projection. It cuts an image into a fixed number of ViT tokens and prepends them to an optional prompt. One particularity is that the model uses full block attention on all the image tokens plus the input text tokens. The model comes in three resolutions (224x224, 448x448 and 896x896) with 3 base models, 55 fine-tuned versions for different tasks, and 2 mix models. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/paligemma/paligemma_arch.png" alt="drawing" width="600"/> <small> PaliGemma architecture. Taken from the <a href="https://huggingface.co/blog/paligemma">blog post.</a> </small> This model was contributed by [Molbap](https://huggingface.co/Molbap).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#overview
#overview
.md
289_1
- PaliGemma is not meant for conversational use, and it works best when fine-tuning to a specific use case. Some downstream tasks on which PaliGemma can be fine-tuned include image captioning, visual question answering (VQA), object detection, referring expression segmentation and document understanding. - One can use `PaliGemmaProcessor` to prepare images, text and optional labels for the model. When fine-tuning a PaliGemma model, the `suffix` argument can be passed to the processor which creates the `labels` for the model: ```python prompt = "What is on the flower?" answer = "a bee" inputs = processor(images=raw_image, text=prompt, suffix=answer, return_tensors="pt") ```
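Building on the snippet above, a minimal illustrative training step could look as follows (this is a sketch, not the official fine-tuning recipe; the base checkpoint name is an assumption, and any PaliGemma base model should behave the same way):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-224"  # assumed base checkpoint intended for fine-tuning
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg?download=true"
raw_image = Image.open(requests.get(image_file, stream=True).raw)

# passing `suffix` makes the processor build `labels` alongside the usual inputs
inputs = processor(images=raw_image, text="What is on the flower?", suffix="a bee", return_tensors="pt")

outputs = model(**inputs)  # the loss is computed because `labels` are present
outputs.loss.backward()
```

In a real training loop you would wrap this in an optimizer step, or hand the processed dataset to the Trainer API as shown in the notebooks linked below in the resources section.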
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#usage-tips
#usage-tips
.md
289_2
The model can accept a single or multiple images. According to the [paper](https://arxiv.org/abs/2407.07726v1), PaliGemma checkpoints can transfer to tasks that take multiple images as input. NLVR2 is one such task: it asks one question about two images and requires looking at both to give the correct answer. Below is example code for single- and multi-image inference.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#usage-example
#usage-example
.md
289_3
```python import requests from PIL import Image from transformers import AutoProcessor, PaliGemmaForConditionalGeneration model_id = "google/paligemma-3b-mix-224" model = PaliGemmaForConditionalGeneration.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) prompt = "What is on the flower?" image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg?download=true" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(raw_image, prompt, return_tensors="pt") output = model.generate(**inputs, max_new_tokens=20) print(processor.decode(output[0], skip_special_tokens=True)[inputs.input_ids.shape[1]: ]) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#single-image-inference
#single-image-inference
.md
289_4
```python import requests from PIL import Image from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor model_id = "google/paligemma-3b-ft-nlvr2-448" # checkpoint tuned for multiple images model = PaliGemmaForConditionalGeneration.from_pretrained(model_id) processor = PaliGemmaProcessor.from_pretrained(model_id) prompt = "answer en Which of the two pictures shows a snowman, first or second?" stop_sign_image = Image.open( requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw ) snow_image = Image.open( requests.get( "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg", stream=True ).raw ) inputs = processor(images=[[snow_image, stop_sign_image]], text=prompt, return_tensors="pt") output = model.generate(**inputs, max_new_tokens=20) print(processor.decode(output[0], skip_special_tokens=True)[inputs.input_ids.shape[1]: ]) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#multi-image-inference
#multi-image-inference
.md
289_5
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PaliGemma. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A blog post introducing all the features of PaliGemma can be found [here](https://huggingface.co/blog/paligemma). - Demo notebooks on how to fine-tune PaliGemma for VQA with the Trainer API along with inference can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/paligemma). - Demo notebooks on how to fine-tune PaliGemma on a custom dataset (receipt image -> JSON) along with inference can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/PaliGemma). 🌎
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#resources
#resources
.md
289_6
This is the configuration class to store the configuration of a [`PaliGemmaForConditionalGeneration`]. It is used to instantiate a PaliGemma model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PaliGemma-2B, e.g. [paligemma-hf/paligemma-2b](https://huggingface.co/paligemma-hf/paligemma-2b) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`PaliGemmaVisionConfig`, *optional*): Custom vision config or dict. text_config (`Union[AutoConfig, dict]`, *optional*): The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`. ignore_index (`int`, *optional*, defaults to -100): The ignore index for the loss function. image_token_index (`int`, *optional*, defaults to 256000): The image token index to encode the image prompt. vocab_size (`int`, *optional*, defaults to 257152): Vocabulary size of the PaliGemma model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`~PaliGemmaForConditionalGeneration`]. projection_dim (`int`, *optional*, defaults to 2048): Dimension of the multimodal projection space. hidden_size (`int`, *optional*, defaults to 2048): Dimension of the hidden layer of the Language model. Example: ```python >>> from transformers import PaliGemmaForConditionalGeneration, PaliGemmaConfig, SiglipVisionConfig, GemmaConfig >>> # Initializing a Siglip-like vision config >>> vision_config = SiglipVisionConfig() >>> # Initializing a PaliGemma config >>> text_config = GemmaConfig() >>> # Initializing a PaliGemma paligemma-3b-224 style configuration >>> configuration = PaliGemmaConfig(vision_config, text_config) >>> # Initializing a model from the paligemma-3b-224 style configuration >>> model = PaliGemmaForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaconfig
#paligemmaconfig
.md
289_7
Constructs a PaliGemma processor which wraps a PaliGemma image processor and a PaliGemma tokenizer into a single processor. [`PaliGemmaProcessor`] offers all the functionalities of [`SiglipImageProcessor`] and [`GemmaTokenizerFast`]. See the [`~PaliGemmaProcessor.__call__`] and [`~PaliGemmaProcessor.decode`] for more information. Args: image_processor ([`SiglipImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`GemmaTokenizerFast`], *optional*): The tokenizer is a required input. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaprocessor
#paligemmaprocessor
.md
289_8
The PaliGemma model, which consists of a vision backbone and a language model. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`PaliGemmaConfig`] or [`PaliGemmaVisionConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/paligemma.md
https://huggingface.co/docs/transformers/en/model_doc/paligemma/#paligemmaforconditionalgeneration
#paligemmaforconditionalgeneration
.md
289_9
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/
.md
290_0
The [`EncoderDecoderModel`] can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an [`EncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). An application of this architecture could be to leverage two pretrained [`BertModel`] instances as the encoder and the decoder of a summarization model, as was shown in [Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345) by Yang Liu and Mirella Lapata.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#overview
#overview
.md
290_1
[`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. ```python >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = EncoderDecoderModel(config=config) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#randomly-initializing-encoderdecodermodel-from-model-configurations
#randomly-initializing-encoderdecodermodel-from-model-configurations
.md
290_2
[`EncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, *e.g.* BERT, can serve as the encoder, and any pretrained auto-encoding model (*e.g.* BERT), pretrained causal language model (*e.g.* GPT2), or pretrained decoder part of a sequence-to-sequence model (*e.g.* the decoder of BART) can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`EncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `EncoderDecoderModel` class provides the [`EncoderDecoderModel.from_encoder_decoder_pretrained`] method. ```python >>> from transformers import EncoderDecoderModel, BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") ```
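As an illustrative variation (not part of the original example), the encoder and decoder do not need to share an architecture. For instance, a BERT encoder can be combined with a GPT-2 decoder; in that case the decoder's cross-attention weights are newly added and randomly initialized, which is why fine-tuning on a downstream task is required:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# combine a BERT encoder with a GPT-2 decoder; cross-attention layers are added and randomly initialized
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google-bert/bert-base-uncased", "openai-community/gpt2"
)

encoder_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
decoder_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

# special tokens the model needs for training and generation
model.config.decoder_start_token_id = decoder_tokenizer.bos_token_id
model.config.pad_token_id = encoder_tokenizer.pad_token_id
```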
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
#initialising-encoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
.md
290_3
To load fine-tuned checkpoints of the `EncoderDecoderModel` class, [`EncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. ```python >>> from transformers import AutoTokenizer, EncoderDecoderModel >>> # load a fine-tuned seq2seq model and corresponding tokenizer >>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> # let's perform inference on a long piece of text >>> ARTICLE_TO_SUMMARIZE = ( ... "PG&E stated it scheduled the blackouts in response to forecasts for high winds " ... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " ... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ... ) >>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids >>> # autoregressively generate summary (uses greedy decoding by default) >>> generated_ids = model.generate(input_ids) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
#loading-an-existing-encoderdecodermodel-checkpoint-and-perform-inference
.md
290_4
[`TFEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a pytorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: ```python >>> # a workaround to load from pytorch checkpoint >>> from transformers import EncoderDecoderModel, TFEncoderDecoderModel >>> _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") >>> _model.encoder.save_pretrained("./encoder") >>> _model.decoder.save_pretrained("./decoder") >>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( ... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ... ) >>> # This is only for copying some specific attributes of this particular model. >>> model.config = _model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#loading-a-pytorch-checkpoint-into-tfencoderdecodermodel
#loading-a-pytorch-checkpoint-into-tfencoderdecodermodel
.md
290_5
Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the `input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded target sequence). ```python >>> from transformers import BertTokenizer, EncoderDecoderModel >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> input_ids = tokenizer( ... "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", ... return_tensors="pt", ... ).input_ids >>> labels = tokenizer( ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.", ... return_tensors="pt", ... ).input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_ids=input_ids, labels=labels).loss ``` Detailed [colab](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=ZwQIEhKOrJpl) for training. This model was contributed by [thomwolf](https://github.com/thomwolf). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#training
#training
.md
290_6
[`EncoderDecoderConfig`] is the configuration class to store the configuration of a [`EncoderDecoderModel`]. It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: kwargs (*optional*): Dictionary of keyword arguments. Notably: - **encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the encoder config. - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the decoder config. Examples: ```python >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> # Initializing a BERT google-bert/bert-base-uncased style configuration >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> # Initializing a Bert2Bert model (with random weights) from the google-bert/bert-base-uncased style configurations >>> model = EncoderDecoderModel(config=config) >>> # Accessing the model configuration >>> config_encoder = model.config.encoder >>> config_decoder = model.config.decoder >>> # set decoder config to causal lm >>> config_decoder.is_decoder = True >>> config_decoder.add_cross_attention = True >>> # Saving the model, including its configuration >>> model.save_pretrained("my-model") >>> # loading model and config from pretrained folder >>> encoder_decoder_config = EncoderDecoderConfig.from_pretrained("my-model") >>> model = EncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config) ``` <frameworkcontent> <pt>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecoderconfig
#encoderdecoderconfig
.md
290_7
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the [`~AutoModel.from_pretrained`] function and the decoder is loaded via the [`~AutoModelForCausalLM.from_pretrained`] function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`EncoderDecoderConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. [`EncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as the encoder and another one as the decoder, when created with the [`~AutoModel.from_pretrained`] class method for the encoder and the [`~AutoModelForCausalLM.from_pretrained`] class method for the decoder. Methods: forward - from_encoder_decoder_pretrained </pt> <tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#encoderdecodermodel
#encoderdecodermodel
.md
290_8
No docstring available for TFEncoderDecoderModel Methods: call - from_encoder_decoder_pretrained </tf> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#tfencoderdecodermodel
#tfencoderdecodermodel
.md
290_9
No docstring available for FlaxEncoderDecoderModel Methods: __call__ - from_encoder_decoder_pretrained </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/encoder-decoder.md
https://huggingface.co/docs/transformers/en/model_doc/encoder-decoder/#flaxencoderdecodermodel
#flaxencoderdecodermodel
.md
290_10
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/
.md
291_0
The *ColPali* model was proposed in [ColPali: Efficient Document Retrieval with Vision Language Models](https://doi.org/10.48550/arXiv.2407.01449) by **Manuel Faysse***, **Hugues Sibille***, **Tony Wu***, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (* denotes equal contribution). Work led by ILLUIN Technology. In our proposed *ColPali* approach, we leverage VLMs to construct efficient multi-vector embeddings directly from document images (“screenshots”) for document retrieval. We train the model to maximize the similarity between these document embeddings and the corresponding query embeddings, using the late interaction method introduced in ColBERT. Using *ColPali* removes the need for potentially complex and brittle layout recognition and OCR pipelines with a single model that can take into account both the textual and visual content (layout, charts, etc.) of a document.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#overview
#overview
.md
291_1
- The *ColPali* arXiv paper can be found [here](https://doi.org/10.48550/arXiv.2407.01449). 📄 - The official blog post detailing ColPali can be found [here](https://huggingface.co/blog/manu/colpali). 📝 - The original model implementation code for the ColPali model and for the `colpali-engine` package can be found [here](https://github.com/illuin-tech/colpali). 🌎 - Cookbooks for learning to use the transformers-native version of *ColPali*, fine-tuning, and similarity maps generation can be found [here](https://github.com/tonywu71/colpali-cookbooks). 📚 This model was contributed by [@tonywu71](https://huggingface.co/tonywu71) and [@yonigozlan](https://huggingface.co/yonigozlan).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#resources
#resources
.md
291_2
This example demonstrates how to use *ColPali* to embed both queries and images, calculate their similarity scores, and identify the most relevant matches. For a specific query, you can retrieve the top-k most similar images by selecting the ones with the highest similarity scores. ```python import torch from PIL import Image from transformers import ColPaliForRetrieval, ColPaliProcessor model_name = "vidore/colpali-v1.2-hf" model = ColPaliForRetrieval.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="cuda:0", # or "mps" if on Apple Silicon ).eval() processor = ColPaliProcessor.from_pretrained(model_name) # Your inputs (replace dummy images with screenshots of your documents) images = [ Image.new("RGB", (32, 32), color="white"), Image.new("RGB", (16, 16), color="black"), ] queries = [ "What is the organizational structure for our R&D department?", "Can you provide a breakdown of last year’s financial performance?", ] # Process the inputs batch_images = processor(images=images).to(model.device) batch_queries = processor(text=queries).to(model.device) # Forward pass with torch.no_grad(): image_embeddings = model(**batch_images).embeddings query_embeddings = model(**batch_queries).embeddings # Score the queries against the images scores = processor.score_retrieval(query_embeddings, image_embeddings) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#usage
#usage
.md
291_3
Configuration class to store the configuration of a [`ColPaliForRetrieval`]. It is used to instantiate an instance of `ColPaliForRetrieval` according to the specified arguments, defining the model architecture following the methodology from the "ColPali: Efficient Document Retrieval with Vision Language Models" paper. Creating a configuration with the default settings will result in a configuration where the VLM backbone is set to the default PaliGemma configuration, i.e. the one from [vidore/colpali-v1.2](https://huggingface.co/vidore/colpali-v1.2). The ColPali config is very similar to [`PaligemmaConfig`], but with an extra attribute defining the embedding dimension. Note that, contrary to what the class name suggests (the name actually refers to the ColPali **methodology**), you can use a different VLM backbone model than PaliGemma by passing the corresponding VLM configuration to the class constructor. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vlm_config (`PretrainedConfig`, *optional*): Configuration of the VLM backbone model. text_config (`PretrainedConfig`, *optional*): Configuration of the text backbone model. Overrides the `text_config` attribute of the `vlm_config` if provided. embedding_dim (`int`, *optional*, defaults to 128): Dimension of the multi-vector embeddings produced by the model. Example: ```python from transformers.models.colpali import ColPaliConfig, ColPaliForRetrieval config = ColPaliConfig() model = ColPaliForRetrieval(config) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliconfig
#colpaliconfig
.md
291_4
Constructs a ColPali processor which wraps a PaliGemmaProcessor and special methods to process images and queries, as well as to compute the late-interaction retrieval score. [`ColPaliProcessor`] offers all the functionalities of [`PaliGemmaProcessor`]. See the [`~PaliGemmaProcessor.__call__`] for more information. Args: image_processor ([`SiglipImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`LlamaTokenizerFast`], *optional*): The tokenizer is a required input. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliprocessor
#colpaliprocessor
.md
291_5
In our proposed ColPali approach, we leverage VLMs to construct efficient multi-vector embeddings directly from document images (“screenshots”) for document retrieval. We train the model to maximize the similarity between these document embeddings and the corresponding query embeddings, using the late interaction method introduced in ColBERT. Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines with a single model that can take into account both the textual and visual content (layout, charts, etc.) of a document. Methods: forward
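For intuition, the late-interaction (MaxSim) scoring that [`ColPaliProcessor.score_retrieval`] performs can be sketched in plain PyTorch as follows; this is a simplified illustration that ignores the padding and batching details the processor handles for you:

```python
import torch

def late_interaction_score(query_embeddings: torch.Tensor, image_embeddings: torch.Tensor) -> torch.Tensor:
    """MaxSim scoring as introduced in ColBERT (simplified sketch).

    query_embeddings: (num_queries, query_len, dim)
    image_embeddings: (num_images, num_patches, dim)
    returns: (num_queries, num_images) similarity scores
    """
    # similarity of every query token with every image patch token
    sim = torch.einsum("qnd,ipd->qinp", query_embeddings, image_embeddings)
    # for each query token keep its best-matching image token, then sum over query tokens
    return sim.max(dim=-1).values.sum(dim=-1)

# toy usage with random embeddings of dimension 128 (the default embedding_dim)
queries = torch.randn(2, 16, 128)
images = torch.randn(4, 1024, 128)
print(late_interaction_score(queries, images).shape)  # torch.Size([2, 4])
```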
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/colpali.md
https://huggingface.co/docs/transformers/en/model_doc/colpali/#colpaliforretrieval
#colpaliforretrieval
.md
291_6
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
https://huggingface.co/docs/transformers/en/model_doc/siglip/
.md
292_0
The SigLIP model was proposed in [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer. SigLIP proposes to replace the loss function used in [CLIP](clip) by a simple pairwise sigmoid loss. This results in better performance in terms of zero-shot classification accuracy on ImageNet. The abstract from the paper is the following: *We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient.*
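To make the contrast with the usual softmax-based contrastive loss concrete, here is a rough PyTorch sketch of the pairwise sigmoid loss described in the abstract (variable names are our own; `log_t` is the learnable log-temperature and `b` the learnable bias from the paper):

```python
import torch
import torch.nn.functional as F

def siglip_sigmoid_loss(image_embeds, text_embeds, log_t, b):
    """Pairwise sigmoid loss (simplified sketch).

    image_embeds, text_embeds: (N, D) L2-normalized embeddings of N matching image-text pairs.
    log_t, b: learnable scalar temperature (in log space) and bias.
    """
    logits = image_embeds @ text_embeds.t() * log_t.exp() + b          # (N, N) pairwise logits
    labels = 2 * torch.eye(logits.size(0), device=logits.device) - 1   # +1 on the diagonal, -1 elsewhere
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)

# toy usage with random, normalized embeddings
img = F.normalize(torch.randn(8, 16), dim=-1)
txt = F.normalize(torch.randn(8, 16), dim=-1)
print(siglip_sigmoid_loss(img, txt, log_t=torch.tensor(2.3), b=torch.tensor(-10.0)))
```

Every image-text pair contributes an independent binary term, which is why no batch-wide normalization is needed.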
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
https://huggingface.co/docs/transformers/en/model_doc/siglip/#overview
#overview
.md
292_1
- Usage of SigLIP is similar to [CLIP](clip). The main difference is the training loss, which does not require a global view of all the pairwise similarities of images and texts within a batch. One needs to apply the sigmoid activation function to the logits, rather than the softmax. - Training is supported but does not use `torch.distributed` utilities, which may limit the scalability of batch size. However, DDP and FSDP work on single-node multi-GPU setups. - When using the standalone [`SiglipTokenizer`] or [`SiglipProcessor`], make sure to pass `padding="max_length"` as that's how the model was trained. - To get the same results as the pipeline, a prompt template of "This is a photo of {label}." should be used. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg" alt="drawing" width="600"/> <small> SigLIP evaluation results compared to CLIP. Taken from the <a href="https://arxiv.org/abs/2303.15343">original paper</a>.</small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/big_vision/tree/main).
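Putting these tips together, a minimal zero-shot classification sketch could look like the following (the checkpoint name is an assumption; any SigLIP checkpoint should expose the same API):

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip-base-patch16-224"  # assumed checkpoint
model = AutoModel.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

candidate_labels = ["2 cats", "a plane", "a remote"]
texts = [f"This is a photo of {label}." for label in candidate_labels]  # recommended prompt template

# the model was trained with padding="max_length", so use it here as well
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# sigmoid instead of softmax: each image-text pair gets an independent probability
probs = torch.sigmoid(outputs.logits_per_image)
print([f"{p:.1%} probability the image matches '{label}'" for p, label in zip(probs[0], candidate_labels)])
```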
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/siglip.md
https://huggingface.co/docs/transformers/en/model_doc/siglip/#usage-tips
#usage-tips
.md
292_2