Dataset columns: text (string, 5–58.6k characters), source (string, 470 distinct values), url (string, 49–167 characters), source_section (string, 0–90 characters), file_type (string, 1 distinct value), id (string, 3–6 characters)
Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`CvtConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#cvtforimageclassification
#cvtforimageclassification
.md
401_6
No docstring available for TFCvtModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#tfcvtmodel
#tfcvtmodel
.md
401_7
No docstring available for TFCvtForImageClassification Methods: call </tf> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/cvt.md
https://huggingface.co/docs/transformers/en/model_doc/cvt/#tfcvtforimageclassification
#tfcvtforimageclassification
.md
401_8
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/
.md
402_0
The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. The abstract from the paper is the following: *The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/dinov2).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#overview
#overview
.md
402_1
The model can be traced using `torch.jit.trace`, which leverages JIT compilation to optimize the model and make it faster to run. Note that tracing still produces some mismatched elements: the difference between the original model and the traced model is on the order of 1e-4.

```python
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]

# We have to force return_dict=False for tracing
model.config.return_dict = False

with torch.no_grad():
    traced_model = torch.jit.trace(model, [inputs.pixel_values])
    traced_outputs = traced_model(inputs.pixel_values)

print((last_hidden_states - traced_outputs[0]).abs().max())
```
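Once traced, the module can be serialized with the standard TorchScript utilities. A minimal follow-up sketch, reusing `traced_model` and `inputs` from the example above (the file name is illustrative):

```python
# Save the traced module to disk and load it back without needing the Python model class.
torch.jit.save(traced_model, "dinov2_base_traced.pt")  # example file name
loaded_model = torch.jit.load("dinov2_base_traced.pt")

with torch.no_grad():
    reloaded_outputs = loaded_model(inputs.pixel_values)
```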
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#usage-tips
#usage-tips
.md
402_2
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2. - Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎 <PipelineTag pipeline="image-classification"/> - [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
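As a quick reference, a minimal classification sketch; the checkpoint name `facebook/dinov2-base-imagenet1k-1-layer` (a DINOv2 backbone with a linear ImageNet-1k head) is an assumption, and any DINOv2 classification checkpoint works the same way:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Dinov2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name is an assumption; see the Hub for the available DINOv2 classification checkpoints.
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base-imagenet1k-1-layer")
model = Dinov2ForImageClassification.from_pretrained("facebook/dinov2-base-imagenet1k-1-layer")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```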
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#resources
#resources
.md
402_3
This is the configuration class to store the configuration of a [`Dinov2Model`]. It is used to instantiate an Dinov2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Dinov2 [google/dinov2-base-patch16-224](https://huggingface.co/google/dinov2-base-patch16-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. mlp_ratio (`int`, *optional*, defaults to 4): Ratio of the hidden size of the MLPs relative to the `hidden_size`. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 14): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. layerscale_value (`float`, *optional*, defaults to 1.0): Initial value to use for layer scale. drop_path_rate (`float`, *optional*, defaults to 0.0): Stochastic depth rate per sample (when applied in the main path of residual layers). use_swiglu_ffn (`bool`, *optional*, defaults to `False`): Whether to use the SwiGLU feedforward neural network. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. apply_layernorm (`bool`, *optional*, defaults to `True`): Whether to apply layer normalization to the feature maps in case the model is used as backbone. 
reshape_hidden_states (`bool`, *optional*, defaults to `True`): Whether to reshape the feature maps to 4D tensors of shape `(batch_size, hidden_size, height, width)` in case the model is used as backbone. If `False`, the feature maps will be 3D tensors of shape `(batch_size, seq_len, hidden_size)`. Example:

```python
>>> from transformers import Dinov2Config, Dinov2Model

>>> # Initializing a Dinov2 dinov2-base-patch16-224 style configuration
>>> configuration = Dinov2Config()

>>> # Initializing a model (with random weights) from the dinov2-base-patch16-224 style configuration
>>> model = Dinov2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

<frameworkcontent> <pt>
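The backbone-specific arguments above (`out_features`, `out_indices`, `apply_layernorm`, `reshape_hidden_states`) come into play when the model is used through `Dinov2Backbone`. A minimal sketch with random weights; the stage name assumes the default 12-layer configuration, where the last stage is `"stage12"`:

```python
import torch
from transformers import Dinov2Config, Dinov2Backbone

# Request the feature map of the last stage; with reshape_hidden_states=True (the default)
# it is returned as a (batch_size, hidden_size, height, width) tensor.
config = Dinov2Config(out_features=["stage12"], reshape_hidden_states=True)
backbone = Dinov2Backbone(config)

pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    outputs = backbone(pixel_values)

print([feature_map.shape for feature_map in outputs.feature_maps])
```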
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
#dinov2config
.md
402_4
The bare DINOv2 Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2model
#dinov2model
.md
402_5
Dinov2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2forimageclassification
#dinov2forimageclassification
.md
402_6
No docstring available for FlaxDinov2Model Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#flaxdinov2model
#flaxdinov2model
.md
402_7
No docstring available for FlaxDinov2ForImageClassification Methods: __call__ </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#flaxdinov2forimageclassification
#flaxdinov2forimageclassification
.md
402_8
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/
.md
403_0
The UnivNet model was proposed in [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim. The UnivNet model is a generative adversarial network (GAN) trained to synthesize high-fidelity speech waveforms. The UnivNet model shared in `transformers` is the *generator*, which maps a conditioning log-mel spectrogram and an optional noise sequence to a speech waveform (i.e. a vocoder). Only the generator is required for inference. The *discriminator* used to train the `generator` is not implemented. The abstract from the paper is the following: *Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.* Tips:
- The `noise_sequence` argument for [`UnivNetModel.forward`] should be standard Gaussian noise (such as from `torch.randn`) of shape `([batch_size], noise_length, model.config.model_in_channels)`, where `noise_length` should match the length dimension (dimension 1) of the `input_features` argument. If not supplied, it will be randomly generated; a `torch.Generator` can be supplied to the `generator` argument so that the forward pass can be reproduced. (Note that [`UnivNetFeatureExtractor`] will return generated noise by default, so it shouldn't be necessary to generate `noise_sequence` manually.)
- Padding added by [`UnivNetFeatureExtractor`] can be removed from the [`UnivNetModel`] output through the [`UnivNetFeatureExtractor.batch_decode`] method, as shown in the usage example below.
- Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying `pad_end = True` to [`UnivNetFeatureExtractor.__call__`]. See [this issue](https://github.com/seungwonpark/melgan/issues/8) for more details.

Usage Example:

```python
import torch
from scipy.io.wavfile import write
from datasets import Audio, load_dataset
from transformers import UnivNetFeatureExtractor, UnivNetModel

model_id_or_path = "dg845/univnet-dev"
model = UnivNetModel.from_pretrained(model_id_or_path)
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Resample the audio to the model and feature extractor's sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
# Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples.
inputs = feature_extractor(
    ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt"
)

with torch.no_grad():
    audio = model(**inputs)

# Remove the extra padding at the end of the output.
audio = feature_extractor.batch_decode(**audio)[0]
# Convert to wav file
write("sample_audio.wav", feature_extractor.sampling_rate, audio)
```

This model was contributed by [dg845](https://huggingface.co/dg845). To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at [maum-ai/univnet](https://github.com/maum-ai/univnet) with pretrained checkpoints [here](https://github.com/maum-ai/univnet#pre-trained-model).
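Reusing `model` and `inputs` from the usage example above, here is a minimal sketch of supplying the noise manually, following the shape described in the tips; the seed and variable names are illustrative:

```python
# The feature extractor already returns a `noise_sequence`, so repeated calls with the same
# `inputs` are deterministic. If you build the noise yourself, it must be standard Gaussian
# noise of shape (batch_size, noise_length, model.config.model_in_channels), where
# noise_length matches dimension 1 of `input_features`.
noise = torch.randn(
    inputs["input_features"].shape[0],
    inputs["input_features"].shape[1],
    model.config.model_in_channels,
    generator=torch.Generator().manual_seed(0),  # fixed seed for reproducibility
)
with torch.no_grad():
    audio = model(input_features=inputs["input_features"], noise_sequence=noise)
```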
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
#overview
.md
403_1
This is the configuration class to store the configuration of a [`UnivNetModel`]. It is used to instantiate a UnivNet vocoder model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UnivNet [dg845/univnet-dev](https://huggingface.co/dg845/univnet-dev) architecture, which corresponds to the 'c32' architecture in [maum-ai/univnet](https://github.com/maum-ai/univnet/blob/master/config/default_c32.yaml). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: model_in_channels (`int`, *optional*, defaults to 64): The number of input channels for the UnivNet residual network. This should correspond to `noise_sequence.shape[1]` and the value used in the [`UnivNetFeatureExtractor`] class. model_hidden_channels (`int`, *optional*, defaults to 32): The number of hidden channels of each residual block in the UnivNet residual network. num_mel_bins (`int`, *optional*, defaults to 100): The number of frequency bins in the conditioning log-mel spectrogram. This should correspond to the value used in the [`UnivNetFeatureExtractor`] class. resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 3, 3]`): A tuple of integers defining the kernel sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_kernel_sizes` defines the number of resnet blocks and should match that of `resblock_stride_sizes` and `resblock_dilation_sizes`. resblock_stride_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 4]`): A tuple of integers defining the stride sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_stride_sizes` should match that of `resblock_kernel_sizes` and `resblock_dilation_sizes`. resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 9, 27], [1, 3, 9, 27], [1, 3, 9, 27]]`): A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the UnivNet residual network. The length of `resblock_dilation_sizes` should match that of `resblock_kernel_sizes` and `resblock_stride_sizes`. The length of each nested list in `resblock_dilation_sizes` defines the number of convolutional layers per resnet block. kernel_predictor_num_blocks (`int`, *optional*, defaults to 3): The number of residual blocks in the kernel predictor network, which calculates the kernel and bias for each location variable convolution layer in the UnivNet residual network. kernel_predictor_hidden_channels (`int`, *optional*, defaults to 64): The number of hidden channels for each residual block in the kernel predictor network. kernel_predictor_conv_size (`int`, *optional*, defaults to 3): The kernel size of each 1D convolutional layer in the kernel predictor network. kernel_predictor_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for each residual block in the kernel predictor network. initializer_range (`float`, *optional*, defaults to 0.01): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. leaky_relu_slope (`float`, *optional*, defaults to 0.2): The angle of the negative slope used by the leaky ReLU activation. 
Example: ```python >>> from transformers import UnivNetModel, UnivNetConfig >>> # Initializing a Tortoise TTS style configuration >>> configuration = UnivNetConfig() >>> # Initializing a model (with random weights) from the Tortoise TTS style configuration >>> model = UnivNetModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
#univnetconfig
.md
403_2
Constructs a UnivNet feature extractor. This class extracts log-mel-filter bank features from raw speech using the short time Fourier Transform (STFT). The STFT implementation follows that of TacoTron 2 and Hifi-GAN. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: feature_size (`int`, *optional*, defaults to 1): The feature dimension of the extracted features. sampling_rate (`int`, *optional*, defaults to 24000): The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). padding_value (`float`, *optional*, defaults to 0.0): The value to pad with when applying the padding strategy defined by the `padding` argument to [`UnivNetFeatureExtractor.__call__`]. Should correspond to audio silence. The `pad_end` argument to `__call__` will also use this padding value. do_normalize (`bool`, *optional*, defaults to `False`): Whether to perform Tacotron 2 normalization on the input. Normalizing can help to significantly improve the performance for some models. num_mel_bins (`int`, *optional*, defaults to 100): The number of mel-frequency bins in the extracted spectrogram features. This should match `UnivNetModel.config.num_mel_bins`. hop_length (`int`, *optional*, defaults to 256): The direct number of samples between sliding windows. Otherwise referred to as "shift" in many papers. Note that this is different from other audio feature extractors such as [`SpeechT5FeatureExtractor`] which take the `hop_length` in ms. win_length (`int`, *optional*, defaults to 1024): The direct number of samples for each sliding window. Note that this is different from other audio feature extractors such as [`SpeechT5FeatureExtractor`] which take the `win_length` in ms. win_function (`str`, *optional*, defaults to `"hann_window"`): Name for the window function used for windowing, must be accessible via `torch.{win_function}` filter_length (`int`, *optional*, defaults to 1024): The number of FFT components to use. If `None`, this is determined using `transformers.audio_utils.optimal_fft_length`. max_length_s (`int`, *optional*, defaults to 10): The maximum input lenght of the model in seconds. This is used to pad the audio. fmin (`float`, *optional*, defaults to 0.0): Minimum mel frequency in Hz. fmax (`float`, *optional*): Maximum mel frequency in Hz. If not set, defaults to `sampling_rate / 2`. mel_floor (`float`, *optional*, defaults to 1e-09): Minimum value of mel frequency banks. Note that the way [`UnivNetFeatureExtractor`] uses `mel_floor` is different than in [`transformers.audio_utils.spectrogram`]. center (`bool`, *optional*, defaults to `False`): Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `False`, frame `t` will start at time `t * hop_length`. compression_factor (`float`, *optional*, defaults to 1.0): The multiplicative compression factor for dynamic range compression during spectral normalization. compression_clip_val (`float`, *optional*, defaults to 1e-05): The clip value applied to the waveform before applying dynamic range compression during spectral normalization. normalize_min (`float`, *optional*, defaults to -11.512925148010254): The min value used for Tacotron 2-style linear normalization. The default is the original value from the Tacotron 2 implementation. 
normalize_max (`float`, *optional*, defaults to 2.3143386840820312): The max value used for Tacotron 2-style linear normalization. The default is the original value from the Tacotron 2 implementation. model_in_channels (`int`, *optional*, defaults to 64): The number of input channels to the [`UnivNetModel`] model. This should match `UnivNetModel.config.model_in_channels`. pad_end_length (`int`, *optional*, defaults to 10): If padding the end of each waveform, the number of spectrogram frames worth of samples to append. The number of appended samples will be `pad_end_length * hop_length`. return_attention_mask (`bool`, *optional*, defaults to `True`): Whether or not [`~UnivNetFeatureExtractor.__call__`] should return `attention_mask`. Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
#univnetfeatureextractor
.md
403_3
UnivNet GAN vocoder. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`UnivNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetmodel
#univnetmodel
.md
403_4
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/
.md
404_0
<Tip warning={true}> This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukebox
#jukebox
.md
404_1
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on an artist, genres and lyrics. The abstract from the paper is the following: *We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.* As shown in the following figure, Jukebox is made of 3 `priors` which are decoder-only models. They follow the architecture described in [Generating Long Sequences with Sparse Transformers](https://arxiv.org/abs/1904.10509), modified to support longer context lengths. First, an autoencoder is used to encode the text lyrics. Next, the first prior (also called `top_prior`) attends to the last hidden states extracted from the lyrics encoder. Each prior is linked to the previous one via an `AudioConditioner` module. The `AudioConditioner` upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution. Metadata such as *artist, genre and timing* is passed to each prior, in the form of a start token and a positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQVAE in order to convert them to raw audio. ![JukeboxModel](https://gist.githubusercontent.com/ArthurZucker/92c1acaae62ebf1b6a951710bdd8b6af/raw/c9c517bf4eff61393f6c7dec9366ef02bdd059a3/jukebox.svg) This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/openai/jukebox).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
#overview
.md
404_2
- This model only supports inference. This is for a few reasons, mostly because it requires an extremely large amount of memory to train. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face Trainer!
- This model is very slow, and takes around 8 hours to generate one minute of audio using the 5b top prior on a V100 GPU. To automatically handle the device on which the model should execute, use `accelerate` (see the loading sketch below).
- Contrary to the paper, the order of the priors goes from `0` to `1`, as this felt more intuitive: we sample starting from `0`.
- Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with `fp16` set to `True`.

This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/openai/jukebox).
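A minimal loading sketch, assuming half precision is acceptable; the artist/genre/lyrics strings are illustrative and follow the call form shown in the JukeboxTokenizer section below:

```python
import torch
from transformers import JukeboxModel, JukeboxTokenizer

tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
# Loading in half precision keeps memory usage manageable; device placement can be delegated
# to `accelerate` (e.g. via `device_map="auto"`, assuming the checkpoint supports automatic dispatch).
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", torch_dtype=torch.float16).eval()

# Conditioning inputs: artist, genre and lyrics.
labels = tokenizer("Alan Jackson", "Country Rock", "old town road")["input_ids"]
# Music tokens can then be generated with the sampling methods listed under JukeboxModel
# (e.g. `ancestral_sample`) and decoded back to raw audio with the VQ-VAE.
```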
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#usage-tips
#usage-tips
.md
404_3
This is the configuration class to store the configuration of a [`JukeboxModel`]. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Instantiating a configuration with the defaults will yield a similar configuration to that of [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture. The downsampling and stride are used to determine downsampling of the input sequence. For example, downsampling = (5,3), and strides = (2, 2) will downsample the audio by 2^5 = 32 to get the first level of codes, and 2**8 = 256 to get the second level codes. This is mostly true for training the top level prior and the upsamplers. Args: vqvae_config (`JukeboxVQVAEConfig`, *optional*): Configuration for the `JukeboxVQVAE` model. prior_config_list (`List[JukeboxPriorConfig]`, *optional*): List of the configs for each of the `JukeboxPrior` of the model. The original architecture uses 3 priors. nb_priors (`int`, *optional*, defaults to 3): Number of prior models that will sequentially sample tokens. Each prior is conditional auto regressive (decoder) model, apart from the top prior, which can include a lyric encoder. The available models were trained using a top prior and 2 upsampler priors. sampling_rate (`int`, *optional*, defaults to 44100): Sampling rate of the raw audio. timing_dims (`int`, *optional*, defaults to 64): Dimensions of the JukeboxRangeEmbedding layer which is equivalent to traditional positional embedding layer. The timing embedding layer converts the absolute and relative position in the currently sampled audio to a tensor of length `timing_dims` that will be added to the music tokens. min_duration (`int`, *optional*, defaults to 0): Minimum duration of the audios to generate max_duration (`float`, *optional*, defaults to 600.0): Maximum duration of the audios to generate max_nb_genres (`int`, *optional*, defaults to 5): Maximum number of genres that can be used to condition a single sample. metadata_conditioning (`bool`, *optional*, defaults to `True`): Whether or not to use metadata conditioning, corresponding to the artist, the genre and the min/maximum duration. Example: ```python >>> from transformers import JukeboxModel, JukeboxConfig >>> # Initializing a Jukebox configuration >>> configuration = JukeboxConfig() >>> # Initializing a model from the configuration >>> model = JukeboxModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
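A quick worked example of the downsampling arithmetic described above (plain Python, not library code):

```python
downsampling = (5, 3)  # per-level downsampling exponents
strides = (2, 2)       # per-level strides

hop_level_1 = strides[0] ** downsampling[0]                    # 2**5 = 32 raw samples per code
hop_level_2 = hop_level_1 * strides[1] ** downsampling[1]      # 2**8 = 256 raw samples per code
print(hop_level_1, hop_level_2)  # 32 256
```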
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
#jukeboxconfig
.md
404_4
This is the configuration class to store the configuration of a [`JukeboxPrior`]. It is used to instantiate a `JukeboxPrior` according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the top level prior from the [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox -1b-lyrics) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: act_fn (`str`, *optional*, defaults to `"quick_gelu"`): Activation function. alignment_head (`int`, *optional*, defaults to 2): Head that is responsible of the alignment between lyrics and music. Only used to compute the lyric to audio alignment alignment_layer (`int`, *optional*, defaults to 68): Index of the layer that is responsible of the alignment between lyrics and music. Only used to compute the lyric to audio alignment attention_multiplier (`float`, *optional*, defaults to 0.25): Multiplier coefficient used to define the hidden dimension of the attention layers. 0.25 means that 0.25*width of the model will be used. attention_pattern (`str`, *optional*, defaults to `"enc_dec_with_lyrics"`): Which attention pattern to use for the decoder/ attn_dropout (`int`, *optional*, defaults to 0): Dropout probability for the post-attention layer dropout in the decoder. attn_res_scale (`bool`, *optional*, defaults to `False`): Whether or not to scale the residuals in the attention conditioner block. blocks (`int`, *optional*, defaults to 64): Number of blocks used in the `block_attn`. A sequence of length seq_len is factored as `[blocks, seq_len // blocks]` in the `JukeboxAttention` layer. conv_res_scale (`int`, *optional*): Whether or not to scale the residuals in the conditioner block. Since the top level prior does not have a conditioner, the default value is to None and should not be modified. num_layers (`int`, *optional*, defaults to 72): Number of layers of the transformer architecture. emb_dropout (`int`, *optional*, defaults to 0): Embedding dropout used in the lyric decoder. encoder_config (`JukeboxPriorConfig`, *optional*) : Configuration of the encoder which models the prior on the lyrics. encoder_loss_fraction (`float`, *optional*, defaults to 0.4): Multiplication factor used in front of the lyric encoder loss. hidden_size (`int`, *optional*, defaults to 2048): Hidden dimension of the attention layers. init_scale (`float`, *optional*, defaults to 0.2): Initialization scales for the prior modules. is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether or not the prior is an encoder-decoder model. In case it is not, and `nb_relevant_lyric_tokens` is greater than 0, the `encoder` args should be specified for the lyric encoding. mask (`bool`, *optional*, defaults to `False`): Whether or not to mask the previous positions in the attention. max_duration (`int`, *optional*, defaults to 600): Maximum supported duration of the generated song in seconds. max_nb_genres (`int`, *optional*, defaults to 1): Maximum number of genres that can be used to condition the model. merged_decoder (`bool`, *optional*, defaults to `True`): Whether or not the decoder and the encoder inputs are merged. This is used for the separated encoder-decoder architecture metadata_conditioning (`bool`, *optional*, defaults to `True)`: Whether or not to condition on the artist and genre metadata. 
metadata_dims (`List[int]`, *optional*, defaults to `[604, 7898]`): Number of genres and the number of artists that were used to train the embedding layers of the prior models. min_duration (`int`, *optional*, defaults to 0): Minimum duration of the generated audio on which the model was trained. mlp_multiplier (`float`, *optional*, defaults to 1.0): Multiplier coefficient used to define the hidden dimension of the MLP layers. 0.25 means that 0.25*width of the model will be used. music_vocab_size (`int`, *optional*, defaults to 2048): Number of different music tokens. Should be similar to the `JukeboxVQVAEConfig.nb_discrete_codes`. n_ctx (`int`, *optional*, defaults to 6144): Number of context tokens for each prior. The context tokens are the music tokens that are attended to when generating music tokens. n_heads (`int`, *optional*, defaults to 2): Number of attention heads. nb_relevant_lyric_tokens (`int`, *optional*, defaults to 384): Number of lyric tokens that are used when sampling a single window of length `n_ctx` res_conv_depth (`int`, *optional*, defaults to 3): Depth of the `JukeboxDecoderConvBock` used to upsample the previously sampled audio in the `JukeboxMusicTokenConditioner`. res_conv_width (`int`, *optional*, defaults to 128): Width of the `JukeboxDecoderConvBock` used to upsample the previously sampled audio in the `JukeboxMusicTokenConditioner`. res_convolution_multiplier (`int`, *optional*, defaults to 1): Multiplier used to scale the `hidden_dim` of the `JukeboxResConv1DBlock`. res_dilation_cycle (`int`, *optional*): Dilation cycle used to define the `JukeboxMusicTokenConditioner`. Usually similar to the ones used in the corresponding level of the VQVAE. The first prior does not use it as it is not conditioned on upper level tokens. res_dilation_growth_rate (`int`, *optional*, defaults to 1): Dilation grow rate used between each convolutionnal block of the `JukeboxMusicTokenConditioner` res_downs_t (`List[int]`, *optional*, defaults to `[3, 2, 2]`): Downsampling rates used in the audio conditioning network res_strides_t (`List[int]`, *optional*, defaults to `[2, 2, 2]`): Striding used in the audio conditioning network resid_dropout (`int`, *optional*, defaults to 0): Residual dropout used in the attention pattern. sampling_rate (`int`, *optional*, defaults to 44100): Sampling rate used for training. spread (`int`, *optional*): Spread used in the `summary_spread_attention` pattern timing_dims (`int`, *optional*, defaults to 64): Dimension of the timing embedding. zero_out (`bool`, *optional*, defaults to `False`): Whether or not to zero out convolution weights when initializing.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
#jukeboxpriorconfig
.md
404_5
This is the configuration class to store the configuration of a [`JukeboxVQVAE`]. It is used to instantiate a `JukeboxVQVAE` according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VQVAE from [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: act_fn (`str`, *optional*, defaults to `"relu"`): Activation function of the model. nb_discrete_codes (`int`, *optional*, defaults to 2048): Number of codes of the VQVAE. commit (`float`, *optional*, defaults to 0.02): Commit loss multiplier. conv_input_shape (`int`, *optional*, defaults to 1): Number of audio channels. conv_res_scale (`bool`, *optional*, defaults to `False`): Whether or not to scale the residuals of the `JukeboxResConv1DBlock`. embed_dim (`int`, *optional*, defaults to 64): Embedding dimension of the codebook vectors. hop_fraction (`List[int]`, *optional*, defaults to `[0.125, 0.5, 0.5]`): Fraction of non-intersecting window used when continuing the sampling process. levels (`int`, *optional*, defaults to 3): Number of hierarchical levels that used in the VQVAE. lmu (`float`, *optional*, defaults to 0.99): Used in the codebook update, exponential moving average coefficient. For more detail refer to Appendix A.1 of the original [VQVAE paper](https://arxiv.org/pdf/1711.00937v2.pdf) multipliers (`List[int]`, *optional*, defaults to `[2, 1, 1]`): Depth and width multipliers used for each level. Used on the `res_conv_width` and `res_conv_depth` res_conv_depth (`int`, *optional*, defaults to 4): Depth of the encoder and decoder block. If no `multipliers` are used, this is the same for each level. res_conv_width (`int`, *optional*, defaults to 32): Width of the encoder and decoder block. If no `multipliers` are used, this is the same for each level. res_convolution_multiplier (`int`, *optional*, defaults to 1): Scaling factor of the hidden dimension used in the `JukeboxResConv1DBlock`. res_dilation_cycle (`int`, *optional*): Dilation cycle value used in the `JukeboxResnet`. If an int is used, each new Conv1 block will have a depth reduced by a power of `res_dilation_cycle`. res_dilation_growth_rate (`int`, *optional*, defaults to 3): Resnet dilation growth rate used in the VQVAE (dilation_growth_rate ** depth) res_downs_t (`List[int]`, *optional*, defaults to `[3, 2, 2]`): Downsampling rate for each level of the hierarchical VQ-VAE. res_strides_t (`List[int]`, *optional*, defaults to `[2, 2, 2]`): Stride used for each level of the hierarchical VQ-VAE. sample_length (`int`, *optional*, defaults to 1058304): Provides the max input shape of the VQVAE. Is used to compute the input shape of each level. init_scale (`float`, *optional*, defaults to 0.2): Initialization scale. zero_out (`bool`, *optional*, defaults to `False`): Whether or not to zero out convolution weights when initializing.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
#jukeboxvqvaeconfig
.md
404_6
Constructs a Jukebox tokenizer. Jukebox can be conditioned on 3 different inputs:
- Artists: unique ids are associated with each artist from the provided dictionary.
- Genres: unique ids are associated with each genre from the provided dictionary.
- Lyrics: character-based tokenization. Must be initialized with the list of characters that are inside the vocabulary.

This tokenizer does not require training. It should be able to process a different number of inputs, as the conditioning of the model can be done on the three different queries. If `None` is provided, default values will be used, depending on the number of genres on which the model should be conditioned (`n_genres`).

```python
>>> from transformers import JukeboxTokenizer

>>> tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
>>> tokenizer("Alan Jackson", "Country Rock", "old town road")["input_ids"]
[tensor([[ 0, 0, 0, 6785, 546, 41, 38, 30, 76, 46, 41, 49, 40, 76, 44, 41, 27, 30]]), tensor([[ 0, 0, 0, 145, 0]]), tensor([[ 0, 0, 0, 145, 0]])]
```

You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> If nothing is provided, the genres and the artist will either be selected randomly or set to None. </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. However, the code does not allow that and only supports composing from various genres. Args: artists_file (`str`): Path to the vocabulary file which contains a mapping between artists and ids. The default file supports both "v2" and "v3". genres_file (`str`): Path to the vocabulary file which contains a mapping between genres and ids. lyrics_file (`str`): Path to the vocabulary file which contains the accepted characters for the lyrics tokenization. version (`List[str]`, *optional*, defaults to `["v3", "v2", "v2"]`): List of the tokenizer versions. The `5b-lyrics`'s top level prior model was trained using `v3` instead of `v2`. n_genres (`int`, *optional*, defaults to 1): Maximum number of genres to use for composition. max_n_lyric_tokens (`int`, *optional*, defaults to 512): Maximum number of lyric tokens to keep. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Methods: save_vocabulary
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
#jukeboxtokenizer
.md
404_7
The bare JUKEBOX Model used for music generation. Four sampling techniques are supported: `primed_sample`, `upsample`, `continue_sample` and `ancestral_sample`. It does not have a `forward` method as the training is not end-to-end. If you want to fine-tune the model, it is recommended to use the `JukeboxPrior` class and train each prior individually. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (`JukeboxConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: ancestral_sample - primed_sample - continue_sample - upsample - _sample
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxmodel
#jukeboxmodel
.md
404_8
The JukeboxPrior class, which is a wrapper around the various conditioning modules and the transformer. JukeboxPrior models can be seen as language models trained on music. They model the next `music token` prediction task. If a (lyric) `encoder` is defined, it also models the `next character` prediction on the lyrics. They can be conditioned on timing, artist, genre, lyrics and codes from lower-level priors. Args: config (`JukeboxPriorConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. level (`int`, *optional*): Current level of the Prior. Should be in range `[0,nb_priors]`. nb_priors (`int`, *optional*, defaults to 3): Total number of priors. vqvae_encoder (`Callable`, *optional*): Encoding method of the VQVAE encoder used in the forward pass of the model. Passing functions instead of the vqvae module to avoid getting the parameters. vqvae_decoder (`Callable`, *optional*): Decoding method of the VQVAE decoder used in the forward pass of the model. Passing functions instead of the vqvae module to avoid getting the parameters. Methods: sample - forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxprior
#jukeboxprior
.md
404_9
The Hierarchical VQ-VAE model used in Jukebox. This model follows the Hierarchical VQVAE paper from [Will Williams, Sam Ringer, Tom Ash, John Hughes, David MacLeod, Jamie Dougherty](https://arxiv.org/abs/2002.08111). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (`JukeboxConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - encode - decode
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvae
#jukeboxvqvae
.md
404_10
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/
.md
405_0
The MusicGen model was proposed in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or *audio codes*, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform. Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass. The abstract from the paper is the following: *We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.* This model was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/facebookresearch/audiocraft). The pre-trained checkpoints can be found on the [Hugging Face Hub](https://huggingface.co/models?sort=downloads&search=facebook%2Fmusicgen-).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#overview
#overview
.md
405_1
- After downloading the original checkpoints from [here](https://github.com/facebookresearch/audiocraft/blob/main/docs/MUSICGEN.md#importing--exporting-models), you can convert them using the **conversion script** available at `src/transformers/models/musicgen/convert_musicgen_transformers.py` with the following command:

```bash
python src/transformers/models/musicgen/convert_musicgen_transformers.py \
    --checkpoint small --pytorch_dump_folder /output/path --safe_serialization
```
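Once converted, the dump folder can be loaded like any other checkpoint. A minimal sketch, where `/output/path` mirrors the `--pytorch_dump_folder` argument above:

```python
from transformers import MusicgenForConditionalGeneration

# Load the converted weights from the local dump folder created by the command above.
model = MusicgenForConditionalGeneration.from_pretrained("/output/path")
```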
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#usage-tips
#usage-tips
.md
405_2
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, so we encourage using sampling mode where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenForConditionalGeneration.generate`], or by overriding the model's generation config (see below). Generation is limited by the sinusoidal positional embeddings to 30-second inputs. This means MusicGen cannot generate more than 30 seconds of audio (1503 tokens), and input audio passed for audio-prompted generation counts towards this limit: given a 20-second audio input, MusicGen cannot generate more than 10 seconds of additional audio. Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen. The mono versions generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right), and each set of codebooks is decoded independently through the audio compression model. The audio streams for each channel are combined to give the final stereo output. A short sketch illustrating both the sampling flag and the stereo variant is shown below.
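As a minimal sketch of the points above (not a definitive recipe), the snippet below makes the sampling flag explicit and loads a stereo variant; the `facebook/musicgen-stereo-small` checkpoint name is an assumption based on the Hub naming used elsewhere on this page:

```python
>>> from transformers import MusicgenForConditionalGeneration

>>> # mono checkpoint, with sampling mode made explicit (sampling is also the default)
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
>>> inputs = model.get_unconditional_inputs(num_samples=1)
>>> audio_values = model.generate(**inputs, do_sample=True, max_new_tokens=256)
>>> audio_values.shape  # (batch_size, 1, sequence_length) -> a single (mono) channel

>>> # stereo checkpoint: the two decoded streams are returned as 2 channels
>>> stereo_model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-small")
>>> stereo_inputs = stereo_model.get_unconditional_inputs(num_samples=1)
>>> stereo_values = stereo_model.generate(**stereo_inputs, do_sample=True, max_new_tokens=256)
>>> stereo_values.shape  # (batch_size, 2, sequence_length) -> left/right channels
```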
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation
#generation
.md
405_3
The inputs for unconditional (or 'null') generation can be obtained through the method [`MusicgenForConditionalGeneration.get_unconditional_inputs`]: ```python >>> from transformers import MusicgenForConditionalGeneration >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> unconditional_inputs = model.get_unconditional_inputs(num_samples=1) >>> audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256) ``` The audio outputs are a three-dimensional Torch tensor of shape `(batch_size, num_channels, sequence_length)`. To listen to the generated audio samples, you can either play them in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. `scipy`: ```python >>> import scipy >>> sampling_rate = model.config.audio_encoder.sampling_rate >>> scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#unconditional-generation
#unconditional-generation
.md
405_4
The model can generate an audio sample conditioned on a text prompt through use of the [`MusicgenProcessor`] to pre-process the inputs: ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> inputs = processor( ... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` The `guidance_scale` is used in classifier free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or 'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3` (default).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#text-conditional-generation
#text-conditional-generation
.md
405_5
The same [`MusicgenProcessor`] can be used to pre-process an audio prompt that is used for audio continuation. In the following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below: ```bash pip install --upgrade pip pip install datasets[audio] ``` ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True) >>> sample = next(iter(dataset))["audio"] >>> # take the first half of the audio sample >>> sample["array"] = sample["array"][: len(sample["array"]) // 2] >>> inputs = processor( ... audio=sample["array"], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` For batched audio-prompted generation, the generated `audio_values` can be post-processed to remove padding by using the [`MusicgenProcessor`] class: ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True) >>> sample = next(iter(dataset))["audio"] >>> # take the first quarter of the audio sample >>> sample_1 = sample["array"][: len(sample["array"]) // 4] >>> # take the first half of the audio sample >>> sample_2 = sample["array"][: len(sample["array"]) // 2] >>> inputs = processor( ... audio=[sample_1, sample_2], ... sampling_rate=sample["sampling_rate"], ... text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) >>> # post-process to remove padding from the batched audio >>> audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#audio-prompted-generation
#audio-prompted-generation
.md
405_6
The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model's generation config, and updated as desired: ```python >>> from transformers import MusicgenForConditionalGeneration >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> # inspect the default generation config >>> model.generation_config >>> # increase the guidance scale to 4.0 >>> model.generation_config.guidance_scale = 4.0 >>> # decrease the max length to 256 tokens >>> model.generation_config.max_length = 256 ``` Note that any arguments passed to the generate method will **supersede** those in the generation config, so setting `do_sample=False` in the call to generate will supersede the setting of `model.generation_config.do_sample` in the generation config.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#generation-configuration
#generation-configuration
.md
405_7
The MusicGen model can be decomposed into three distinct stages: 1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5 2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations 3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [`MusicgenForCausalLM`], or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class [`MusicgenForConditionalGeneration`]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the `.decoder` attribute of the composite model: ```python >>> from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration >>> # Option 1: get decoder config and pass to `.from_pretrained` >>> decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder >>> decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config) >>> # Option 2: load the entire composite model, but only return the decoder >>> decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder ``` Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [`MusicgenForCausalLM`] can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder/decoders to recover the composite [`MusicgenForConditionalGeneration`] model. Tips: * MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model. * Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable `do_sample` in the call to [`MusicgenForConditionalGeneration.generate`].
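To re-assemble the composite model from sub-models (for example after training the decoder standalone), the `from_sub_models_pretrained` helper can be used. The sketch below is illustrative: the T5 and EnCodec checkpoint names and the local decoder path are assumptions about your setup, not the only valid choices.

```python
>>> from transformers import MusicgenForConditionalGeneration

>>> # hypothetical local path to a MusicgenForCausalLM decoder that was trained and saved separately
>>> decoder_path = "./my_finetuned_musicgen_decoder"

>>> model = MusicgenForConditionalGeneration.from_sub_models_pretrained(
...     text_encoder_pretrained_model_name_or_path="google-t5/t5-base",
...     audio_encoder_pretrained_model_name_or_path="facebook/encodec_32khz",
...     decoder_pretrained_model_name_or_path=decoder_path,
... )

>>> # the re-assembled composite model can then be saved or used for generation
>>> model.save_pretrained("./musicgen-composite")
```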
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#model-structure
#model-structure
.md
405_8
This is the configuration class to store the configuration of a [`MusicgenDecoder`]. It is used to instantiate a MusicGen decoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MusicGen [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 2048): Vocabulary size of the MusicgenDecoder model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`MusicgenDecoder`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of decoder layers. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer block. ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer block. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the decoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_factor (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. scale_embedding (`bool`, *optional*, defaults to `False`): Scale embeddings by dividing by sqrt(hidden_size). use_cache (`bool`, *optional*, defaults to `True`): Whether the model should return the last key/values attentions (not used by all models). num_codebooks (`int`, *optional*, defaults to 4): The number of parallel codebooks forwarded to the model. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether input and output word embeddings should be tied. audio_channels (`int`, *optional*, defaults to 1): Number of channels in the audio data. Either 1 for mono or 2 for stereo. Stereo models generate a separate audio stream for the left/right output channels. Mono models generate a single audio stream output.
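As a quick illustration (mirroring the [`MusicgenConfig`] example further down this page), a randomly-initialized standalone decoder can be instantiated from this configuration:

```python
>>> from transformers import MusicgenDecoderConfig, MusicgenForCausalLM

>>> # Initializing a MusicGen decoder configuration (defaults are similar to facebook/musicgen-small)
>>> configuration = MusicgenDecoderConfig()

>>> # Initializing a standalone decoder (with random weights) from that configuration
>>> model = MusicgenForCausalLM(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```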
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgendecoderconfig
#musicgendecoderconfig
.md
405_9
This is the configuration class to store the configuration of a [`MusicgenModel`]. It is used to instantiate a MusicGen model according to the specified arguments, defining the text encoder, audio encoder and MusicGen decoder configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: kwargs (*optional*): Dictionary of keyword arguments. Notably: - **text_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the text encoder config. - **audio_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the audio encoder config. - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines the decoder config. Example: ```python >>> from transformers import ( ... MusicgenConfig, ... MusicgenDecoderConfig, ... T5Config, ... EncodecConfig, ... MusicgenForConditionalGeneration, ... ) >>> # Initializing text encoder, audio encoder, and decoder model configurations >>> text_encoder_config = T5Config() >>> audio_encoder_config = EncodecConfig() >>> decoder_config = MusicgenDecoderConfig() >>> configuration = MusicgenConfig.from_sub_models_config( ... text_encoder_config, audio_encoder_config, decoder_config ... ) >>> # Initializing a MusicgenForConditionalGeneration (with random weights) from the facebook/musicgen-small style configuration >>> model = MusicgenForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> config_text_encoder = model.config.text_encoder >>> config_audio_encoder = model.config.audio_encoder >>> config_decoder = model.config.decoder >>> # Saving the model, including its configuration >>> model.save_pretrained("musicgen-model") >>> # loading model and config from pretrained folder >>> musicgen_config = MusicgenConfig.from_pretrained("musicgen-model") >>> model = MusicgenForConditionalGeneration.from_pretrained("musicgen-model", config=musicgen_config) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenconfig
#musicgenconfig
.md
405_10
Constructs a MusicGen processor which wraps an EnCodec feature extractor and a T5 tokenizer into a single processor class. [`MusicgenProcessor`] offers all the functionalities of [`EncodecFeatureExtractor`] and [`T5Tokenizer`]. See [`~MusicgenProcessor.__call__`] and [`~MusicgenProcessor.decode`] for more information. Args: feature_extractor (`EncodecFeatureExtractor`): An instance of [`EncodecFeatureExtractor`]. The feature extractor is a required input. tokenizer (`T5Tokenizer`): An instance of [`T5Tokenizer`]. The tokenizer is a required input.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenprocessor
#musicgenprocessor
.md
405_11
The bare Musicgen decoder model outputting raw hidden-states without any specific head on top. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenmodel
#musicgenmodel
.md
405_12
The MusicGen decoder model with a language modelling head on top. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforcausallm
#musicgenforcausallm
.md
405_13
The composite MusicGen model with a text encoder, audio encoder and Musicgen decoder, for music generation tasks with one or both of text and audio prompts. The Musicgen model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`MusicgenConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/musicgen.md
https://huggingface.co/docs/transformers/en/model_doc/musicgen/#musicgenforconditionalgeneration
#musicgenforconditionalgeneration
.md
405_14
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/
.md
406_0
The Swin Transformer was proposed in [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. The abstract from the paper is the following: *This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with **S**hifted **win**dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png" alt="drawing" width="600"/> <small> Swin Transformer architecture. Taken from the <a href="https://arxiv.org/abs/2103.14030">original paper</a>.</small> This model was contributed by [novice03](https://huggingface.co/novice03). The TensorFlow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/microsoft/Swin-Transformer).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#overview
#overview
.md
406_1
- Swin pads the inputs, so any input height and width divisible by `32` is supported. - Swin can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states` (see the sketch below). The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`.
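A minimal sketch of the second point, assuming the `microsoft/swin-tiny-patch4-window7-224` checkpoint and a dummy NumPy image (in practice you would pass a real `PIL.Image`):

```python
>>> import numpy as np
>>> import torch
>>> from transformers import AutoImageProcessor, SwinModel

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

>>> image = (np.random.rand(224, 224, 3) * 255).astype("uint8")  # dummy HxWxC image
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)

>>> # sequence-shaped features: (batch_size, sequence_length, num_channels)
>>> outputs.hidden_states[-1].shape
>>> # spatial feature maps: (batch_size, num_channels, height, width)
>>> outputs.reshaped_hidden_states[-1].shape
```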
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#usage-tips
#usage-tips
.md
406_2
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer. <PipelineTag pipeline="image-classification"/> - [`SwinForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`SwinForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#resources
#resources
.md
406_3
This is the configuration class to store the configuration of a [`SwinModel`]. It is used to instantiate a Swin model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Swin [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. embed_dim (`int`, *optional*, defaults to 96): Dimensionality of patch embedding. depths (`list(int)`, *optional*, defaults to `[2, 2, 6, 2]`): Depth of each layer in the Transformer encoder. num_heads (`list(int)`, *optional*, defaults to `[3, 6, 12, 24]`): Number of attention heads in each layer of the Transformer encoder. window_size (`int`, *optional*, defaults to 7): Size of windows. mlp_ratio (`float`, *optional*, defaults to 4.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. use_absolute_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to add absolute position embeddings to the patch embeddings. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. encoder_stride (`int`, *optional*, defaults to 32): Factor to increase the spatial resolution by in the decoder head for masked image modeling. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. 
Example: ```python >>> from transformers import SwinConfig, SwinModel >>> # Initializing a Swin microsoft/swin-tiny-patch4-window7-224 style configuration >>> configuration = SwinConfig() >>> # Initializing a model (with random weights) from the microsoft/swin-tiny-patch4-window7-224 style configuration >>> model = SwinModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinconfig
#swinconfig
.md
406_4
The bare Swin Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. add_pooling_layer (`bool`, *optional*, defaults to `True`): Whether or not to apply pooling layer. use_mask_token (`bool`, *optional*, defaults to `False`): Whether or not to create and apply mask tokens in the embedding layer. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinmodel
#swinmodel
.md
406_5
Swin Model with a decoder on top for masked image modeling, as proposed in [SimMIM](https://arxiv.org/abs/2111.09886). <Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
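As a rough sketch of how the masked image modeling head is driven (the random `bool_masked_pos` below is purely illustrative; the SimMIM recipe uses a more structured masking strategy, and loading a classification checkpoint into this class leaves the decoder head randomly initialized):

```python
>>> import torch
>>> from transformers import SwinForMaskedImageModeling

>>> model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

>>> pixel_values = torch.rand(1, 3, 224, 224)  # dummy pre-processed image batch
>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2

>>> # randomly mask roughly half of the patches
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> outputs.reconstruction.shape  # reconstructed pixel values, (1, 3, 224, 224)
```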
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinformaskedimagemodeling
#swinformaskedimagemodeling
.md
406_6
Swin Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. <Tip> Note that it's possible to fine-tune Swin on higher resolution images than the ones it has been trained on, by setting `interpolate_pos_encoding` to `True` in the forward of the model. This will interpolate the pre-trained position embeddings to the higher resolution. </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwinConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
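A hedged sketch of the tip above, running inference at a higher resolution than the pre-training one (here 384x384; the explicit `size` override passed to the image processor is an assumption about your preprocessing setup):

```python
>>> import numpy as np
>>> from transformers import AutoImageProcessor, SwinForImageClassification

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

>>> # dummy 384x384 image, larger than the 224x224 resolution the checkpoint was trained on
>>> image = (np.random.rand(384, 384, 3) * 255).astype("uint8")
>>> inputs = image_processor(images=image, size={"height": 384, "width": 384}, return_tensors="pt")

>>> logits = model(**inputs, interpolate_pos_encoding=True).logits
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]
```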
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#swinforimageclassification
#swinforimageclassification
.md
406_7
No docstring available for TFSwinModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#tfswinmodel
#tfswinmodel
.md
406_8
No docstring available for TFSwinForMaskedImageModeling Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#tfswinformaskedimagemodeling
#tfswinformaskedimagemodeling
.md
406_9
No docstring available for TFSwinForImageClassification Methods: call </tf> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/swin.md
https://huggingface.co/docs/transformers/en/model_doc/swin/#tfswinforimageclassification
#tfswinforimageclassification
.md
406_10
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/
.md
407_0
The Perceiver IO model was proposed in [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. Perceiver IO is a generalization of [Perceiver](https://arxiv.org/abs/2103.03206) to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs. The abstract from the paper is the following: *The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.* Here's a TLDR explaining how Perceiver works: The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512 tokens. Perceiver aims to solve this issue by performing self-attention on a set of latent variables instead of on the inputs, and only using the inputs for cross-attention. In this way, the time and memory requirements don't depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are randomly initialized, after which they are trained end-to-end using backpropagation. Internally, [`PerceiverModel`] will create the latents, which is a tensor of shape `(batch_size, num_latents, d_latents)`. One must provide `inputs` (which could be text, images, audio, you name it!) to the model, which it will use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape.
One can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along the sequence dimension, and placing a linear layer on top of that to project the `d_latents` to `num_labels`. This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the last hidden states of the latents, using the outputs as queries, and the latents as keys and values. So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes, providing `inputs` of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the `outputs` as being of shape `(batch_size, 2048, 768)`. Next, one performs cross-attention with the final hidden states of the latents to update the `outputs` tensor. After cross-attention, one still has a tensor of shape `(batch_size, 2048, 768)`. One can then place a regular language modeling head on top, to project the last dimension to the vocabulary size of the model, i.e. creating logits of shape `(batch_size, 2048, 262)` (as Perceiver uses a vocabulary size of 262 byte IDs). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture. Taken from the <a href="https://arxiv.org/abs/2107.14795">original paper</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver). <Tip warning={true}> Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035) </Tip>
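To make the byte-level masked language modeling described above concrete, here is a minimal sketch using the `deepmind/language-perceiver` checkpoint (the exact byte positions masked below are illustrative):

```python
>>> import torch
>>> from transformers import PerceiverTokenizer, PerceiverForMaskedLM

>>> tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
>>> model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

>>> text = "This is an incomplete sentence where some words are missing."
>>> inputs = tokenizer(text, padding="max_length", return_tensors="pt")

>>> # mask a span of bytes (the model operates on raw UTF-8 bytes, one token per byte)
>>> inputs["input_ids"][0, 52:61] = tokenizer.mask_token_id

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> logits.shape  # (batch_size, 2048, 262): one distribution over 262 byte IDs per position
```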
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#overview
#overview
.md
407_1
- The quickest way to get started with the Perceiver is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver). - Refer to the [blog post](https://huggingface.co/blog/perceiver) if you want to fully understand how the model works and is implemented in the library. Note that the models available in the library only showcase some examples of what you can do with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection, audio classification, video classification, etc. - [Text classification task guide](../tasks/sequence_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Image classification task guide](../tasks/image_classification)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#resources
#resources
.md
407_2
models.perceiver.modeling_perceiver.PerceiverModelOutput Base class for Perceiver base model's outputs, with potential hidden states, attentions and cross-attentions. Args: logits (`torch.FloatTensor` of shape `(batch_size, num_labels)`): Classification (or regression if config.num_labels==1) scores (before SoftMax). last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. models.perceiver.modeling_perceiver.PerceiverDecoderOutput Base class for Perceiver decoder outputs, with potential cross-attentions. Args: logits (`torch.FloatTensor` of shape `(batch_size, num_labels)`): Output of the basic decoder. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput Base class for Perceiver's masked language model outputs. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Masked language modeling (MLM) loss. logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, num_latents, num_latents)`. 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. models.perceiver.modeling_perceiver.PerceiverClassifierOutput Base class for Perceiver's outputs of sequence/image classification models, optical flow and multimodal autoencoding. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Classification (or regression if config.num_labels==1) loss. logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`): Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiver-specific-outputs
#perceiver-specific-outputs
.md
407_3
This is the configuration class to store the configuration of a [`PerceiverModel`]. It is used to instantiate a Perceiver model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Perceiver [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_latents (`int`, *optional*, defaults to 256): The number of latents. d_latents (`int`, *optional*, defaults to 1280): Dimension of the latent embeddings. d_model (`int`, *optional*, defaults to 768): Dimension of the inputs. Should only be provided in case [*PerceiverTextPreprocessor*] is used or no preprocessor is provided. num_blocks (`int`, *optional*, defaults to 1): Number of blocks in the Transformer encoder. num_self_attends_per_block (`int`, *optional*, defaults to 26): The number of self-attention layers per block. num_self_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each self-attention layer in the Transformer encoder. num_cross_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each cross-attention layer in the Transformer encoder. qk_channels (`int`, *optional*): Dimension to project the queries + keys before applying attention in the cross-attention and self-attention layers of the encoder. Will default to preserving the dimension of the queries if not specified. v_channels (`int`, *optional*): Dimension to project the values before applying attention in the cross-attention and self-attention layers of the encoder. Will default to preserving the dimension of the queries if not specified. cross_attention_shape_for_attention (`str`, *optional*, defaults to `"kv"`): Dimension to use when downsampling the queries and keys in the cross-attention layer of the encoder. self_attention_widening_factor (`int`, *optional*, defaults to 1): Dimension of the feed-forward layer in the self-attention layers of the Transformer encoder. cross_attention_widening_factor (`int`, *optional*, defaults to 1): Dimension of the feed-forward layer in the cross-attention layer of the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. use_query_residual (`bool`, *optional*, defaults to `True`): Whether to add a query residual in the cross-attention layer of the encoder. vocab_size (`int`, *optional*, defaults to 262): Vocabulary size for the masked language modeling model. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that the masked language modeling model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
image_size (`int`, *optional*, defaults to 56): Size of the images after preprocessing, for [`PerceiverForImageClassificationLearned`]. train_size (`List[int]`, *optional*, defaults to `[368, 496]`): Training size of the images for the optical flow model. num_frames (`int`, *optional*, defaults to 16): Number of video frames used for the multimodal autoencoding model. audio_samples_per_frame (`int`, *optional*, defaults to 1920): Number of audio samples per frame for the multimodal autoencoding model. samples_per_patch (`int`, *optional*, defaults to 16): Number of audio samples per patch when preprocessing the audio for the multimodal autoencoding model. output_shape (`List[int]`, *optional*, defaults to `[1, 16, 224, 224]`): Shape of the output (batch_size, num_frames, height, width) for the video decoder queries of the multimodal autoencoding model. This excludes the channel dimension. output_num_channels (`int`, *optional*, defaults to 512): Number of output channels for each modality decoder. Example: ```python >>> from transformers import PerceiverModel, PerceiverConfig >>> # Initializing a Perceiver deepmind/language-perceiver style configuration >>> configuration = PerceiverConfig() >>> # Initializing a model from the deepmind/language-perceiver style configuration >>> model = PerceiverModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverconfig
#perceiverconfig
.md
407_4
Construct a Perceiver tokenizer. The Perceiver simply uses raw bytes utf-8 encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. bos_token (`str`, *optional*, defaults to `"[BOS]"`): The BOS token (reserved in the vocab, but not actually used). eos_token (`str`, *optional*, defaults to `"[EOS]"`): The end of sequence token (reserved in the vocab, but not actually used). <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> mask_token (`str`, *optional*, defaults to `"[MASK]"`): The MASK token, useful for masked language modeling. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The CLS token (reserved in the vocab, but not actually used). sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from two sequences. Methods: __call__
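A quick sketch of the byte-level behaviour, using the same checkpoint as elsewhere on this page:

```python
>>> from transformers import PerceiverTokenizer

>>> tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")

>>> encoding = tokenizer("hello world")
>>> len(encoding.input_ids)  # roughly one id per UTF-8 byte, plus any special tokens
>>> tokenizer.decode(encoding.input_ids, skip_special_tokens=True)  # -> "hello world"
```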
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivertokenizer
#perceivertokenizer
.md
407_5
No docstring available for PerceiverFeatureExtractor Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverfeatureextractor
#perceiverfeatureextractor
.md
407_6
Constructs a Perceiver image processor. Args: do_center_crop (`bool`, *optional*, defaults to `True`): Whether or not to center crop the image. If the input size is smaller than `crop_size` along any edge, the image will be padded with zeros and then center cropped. Can be overridden by the `do_center_crop` parameter in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 256}`): Desired output size when applying center-cropping. Can be overridden by the `crop_size` parameter in the `preprocess` method. do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image to `(size["height"], size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the image after resizing. Can be overridden by the `size` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): Defines the resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess
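For example, a hedged sketch of pre-processing a dummy image (the `deepmind/vision-perceiver-learned` checkpoint name is an assumption; any Perceiver vision checkpoint with an image processor config should behave similarly):

```python
>>> import numpy as np
>>> from transformers import PerceiverImageProcessor

>>> image_processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-learned")

>>> image = (np.random.rand(300, 400, 3) * 255).astype("uint8")  # dummy HxWxC image
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> inputs["pixel_values"].shape  # center-cropped and resized, e.g. (1, 3, 224, 224)
```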
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverimageprocessor
#perceiverimageprocessor
.md
407_7
models.perceiver.modeling_perceiver.PerceiverTextPreprocessor Text preprocessing for Perceiver Encoder. Can be used to embed `inputs` and add positional encodings. The dimensionality of the embeddings is determined by the `d_model` attribute of the configuration. Args: config ([`PerceiverConfig`]): Model configuration.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivertextpreprocessor
#perceivertextpreprocessor
.md
407_8
models.perceiver.modeling_perceiver.PerceiverImagePreprocessor Image preprocessing for Perceiver Encoder. Note: the *out_channels* argument refers to the output channels of a convolutional layer, if *prep_type* is set to "conv1x1" or "conv". If one adds absolute position embeddings, one must make sure the *num_channels* of the position encoding kwargs are set equal to the *out_channels*. Args: config ([*PerceiverConfig*]): Model configuration. prep_type (`str`, *optional*, defaults to `"conv"`): Preprocessing type. Can be "conv1x1", "conv", "patches", "pixels". spatial_downsample (`int`, *optional*, defaults to 4): Spatial downsampling factor. temporal_downsample (`int`, *optional*, defaults to 1): Temporal downsampling factor (only relevant in case a time dimension is present). position_encoding_type (`str`, *optional*, defaults to `"fourier"`): Position encoding type. Can be "fourier" or "trainable". in_channels (`int`, *optional*, defaults to 3): Number of channels in the input. out_channels (`int`, *optional*, defaults to 64): Number of channels in the output. conv_after_patching (`bool`, *optional*, defaults to `False`): Whether to apply a convolutional layer after patching. conv_after_patching_in_channels (`int`, *optional*, defaults to 54): Number of channels in the input of the convolutional layer after patching. conv2d_use_batchnorm (`bool`, *optional*, defaults to `True`): Whether to use batch normalization in the convolutional layer. concat_or_add_pos (`str`, *optional*, defaults to `"concat"`): How to concatenate the position encoding to the input. Can be "concat" or "add". project_pos_dim (`int`, *optional*, defaults to -1): Dimension of the position encoding to project to. If -1, no projection is applied. **position_encoding_kwargs (`Dict`, *optional*): Keyword arguments for the position encoding.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverimagepreprocessor
#perceiverimagepreprocessor
.md
407_9
models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor One-hot preprocessor for Perceiver Encoder. Can be used to add a dummy index dimension to the input. Args: config ([`PerceiverConfig`]): Model configuration.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiveronehotpreprocessor
#perceiveronehotpreprocessor
.md
407_10
models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor Audio preprocessing for Perceiver Encoder. Args: config ([*PerceiverConfig*]): Model configuration. prep_type (`str`, *optional*, defaults to `"patches"`): Preprocessor type to use. Only "patches" is supported. samples_per_patch (`int`, *optional*, defaults to 96): Number of samples per patch. position_encoding_type (`str`, *optional*, defaults to `"fourier"`): Type of position encoding to use. Can be "trainable" or "fourier". concat_or_add_pos (`str`, *optional*, defaults to `"concat"`): How to concatenate the position encoding to the input. Can be "concat" or "add". out_channels (`int`, *optional*, defaults to 64): Number of channels in the output. project_pos_dim (`int`, *optional*, defaults to -1): Dimension of the position encoding to project to. If -1, no projection is applied. **position_encoding_kwargs (`Dict`, *optional*): Keyword arguments for the position encoding.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiveraudiopreprocessor
#perceiveraudiopreprocessor
.md
407_11
models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor Multimodal preprocessing for Perceiver Encoder. Inputs for each modality are preprocessed, then padded with trainable position embeddings to have the same number of channels. Args: modalities (`Mapping[str, PreprocessorType]`): Dict mapping modality name to preprocessor. mask_probs (`Dict[str, float]`): Dict mapping modality name to masking probability of that modality. min_padding_size (`int`, *optional*, defaults to 2): The minimum padding size for all modalities. The final output will have num_channels equal to the maximum channels across all modalities plus min_padding_size.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivermultimodalpreprocessor
#perceivermultimodalpreprocessor
.md
407_12
models.perceiver.modeling_perceiver.PerceiverProjectionDecoder Baseline projection decoder (no cross-attention). Args: config ([`PerceiverConfig`]): Model configuration.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverprojectiondecoder
#perceiverprojectiondecoder
.md
407_13
models.perceiver.modeling_perceiver.PerceiverBasicDecoder Cross-attention-based decoder. This class can be used to decode the final hidden states of the latents using a cross-attention operation, in which the latents produce keys and values. The shape of the output of this class depends on how one defines the output queries (also called decoder queries). Args: config ([*PerceiverConfig*]): Model configuration. output_num_channels (`int`, *optional*): The number of channels in the output. Will only be used in case *final_project* is set to `True`. position_encoding_type (`str`, *optional*, defaults to "trainable"): The type of position encoding to use. Can be either "trainable", "fourier", or "none". output_index_dims (`int`, *optional*): The number of dimensions of the output queries. Ignored if 'position_encoding_type' == 'none'. num_channels (`int`, *optional*, defaults to 128): The number of channels of the decoder queries. Ignored if 'position_encoding_type' == 'none'. qk_channels (`int`, *optional*): The number of channels of the queries and keys in the cross-attention layer. v_channels (`int`, *optional*): The number of channels of the values in the cross-attention layer. num_heads (`int`, *optional*, defaults to 1): The number of attention heads in the cross-attention layer. widening_factor (`int`, *optional*, defaults to 1): The widening factor of the cross-attention layer. use_query_residual (`bool`, *optional*, defaults to `False`): Whether to use a residual connection between the query and the output of the cross-attention layer. concat_preprocessed_input (`bool`, *optional*, defaults to `False`): Whether to concatenate the preprocessed input to the query. final_project (`bool`, *optional*, defaults to `True`): Whether to project the output of the cross-attention layer to a target dimension. position_encoding_only (`bool`, *optional*, defaults to `False`): Whether to only use this class to define output queries.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverbasicdecoder
#perceiverbasicdecoder
.md
407_14
models.perceiver.modeling_perceiver.PerceiverClassificationDecoder Cross-attention based classification decoder. Light-weight wrapper of [`PerceiverBasicDecoder`] for logit output. Will turn the output of the Perceiver encoder which is of shape (batch_size, num_latents, d_latents) to a tensor of shape (batch_size, num_labels). The queries are of shape (batch_size, 1, num_labels). Args: config ([`PerceiverConfig`]): Model configuration.
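A minimal instantiation sketch, mirroring how the library's classification heads typically wire this decoder up: a single trainable decoder query of width `d_latents` is cross-attended against the latents and projected to logits (`num_labels=10` is an arbitrary illustrative value):
```python
from transformers import PerceiverConfig
from transformers.models.perceiver.modeling_perceiver import PerceiverClassificationDecoder

config = PerceiverConfig(num_labels=10)

# one trainable query cross-attends the (batch_size, num_latents, d_latents) latents
# and is projected to (batch_size, num_labels) logits
decoder = PerceiverClassificationDecoder(
    config,
    num_channels=config.d_latents,
    trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
    use_query_residual=True,
)
```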
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverclassificationdecoder
#perceiverclassificationdecoder
.md
407_15
models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder Cross-attention based optical flow decoder.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiveropticalflowdecoder
#perceiveropticalflowdecoder
.md
407_16
models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder Cross-attention based video-autoencoding decoder. Light-weight wrapper of [*PerceiverBasicDecoder*] with video reshaping logic. Args: config ([*PerceiverConfig*]): Model configuration. output_shape (`List[int]`): Shape of the output as (batch_size, num_frames, height, width), excluding the channel dimension. position_encoding_type (`str`): The type of position encoding to use. Can be either "trainable", "fourier", or "none".
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverbasicvideoautoencodingdecoder
#perceiverbasicvideoautoencodingdecoder
.md
407_17
models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder Multimodal decoding by composing uni-modal decoders. The *modalities* argument of the constructor is a dictionary mapping modality name to the decoder of that modality. That decoder will be used to construct queries for that modality. Modality-specific queries are padded with trainable modality-specific parameters, after which they are concatenated along the time dimension. Next, there is a shared cross attention operation across all modalities. Args: config ([*PerceiverConfig*]): Model configuration. modalities (`Dict[str, PerceiverAbstractDecoder]`): Dictionary mapping modality name to the decoder of that modality. num_outputs (`int`): The number of outputs of the decoder. output_num_channels (`int`): The number of channels in the output. min_padding_size (`int`, *optional*, defaults to 2): The minimum padding size for all modalities. The final output will have num_channels equal to the maximum channels across all modalities plus min_padding_size. subsampled_index_dims (`Dict[str, PerceiverAbstractDecoder]`, *optional*): Dictionary mapping modality name to the subsampled index dimensions to use for the decoder query of that modality.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivermultimodaldecoder
#perceivermultimodaldecoder
.md
407_18
models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor Projection postprocessing for Perceiver. Can be used to project the channels of the decoder output to a lower dimension. Args: in_channels (`int`): Number of channels in the input. out_channels (`int`): Number of channels in the output.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverprojectionpostprocessor
#perceiverprojectionpostprocessor
.md
407_19
models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor Audio postprocessing for Perceiver. Can be used to convert the decoder output to audio features. Args: config ([*PerceiverConfig*]): Model configuration. in_channels (`int`): Number of channels in the input. postproc_type (`str`, *optional*, defaults to `"patches"`): Postprocessor type to use. Currently, only "patches" is supported.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiveraudiopostprocessor
#perceiveraudiopostprocessor
.md
407_20
models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor Classification postprocessing for Perceiver. Can be used to convert the decoder output to classification logits. Args: config ([*PerceiverConfig*]): Model configuration. in_channels (`int`): Number of channels in the input.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverclassificationpostprocessor
#perceiverclassificationpostprocessor
.md
407_21
models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor Multimodal postprocessing for Perceiver. Can be used to combine modality-specific postprocessors into a single postprocessor. Args: modalities (`Mapping[str, PostprocessorType]`): Dictionary mapping modality name to postprocessor class for that modality. input_is_dict (`bool`, *optional*, defaults to `False`): If True, input is assumed to be dictionary structured, and outputs keep the same dictionary shape. If False, input is a tensor which is sliced up during postprocessing by *modality_sizes*.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivermultimodalpostprocessor
#perceivermultimodalpostprocessor
.md
407_22
The Perceiver: a scalable, fully attentional architecture. <Tip> Note that it's possible to fine-tune Perceiver on higher resolution images than the ones it has been trained on, by setting `interpolate_pos_encoding` to `True` in the forward of the model. This will interpolate the pre-trained position embeddings to the higher resolution. </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. decoder (*DecoderType*, *optional*): Optional decoder to use to decode the latent representation of the encoder. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverBasicDecoder*, *transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder*. input_preprocessor (*PreprocessorType*, *optional*): Optional input preprocessor to use. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor*. output_postprocessor (*PostprocessorType*, *optional*): Optional output postprocessor to use. Examples include *transformers.models.perceiver.modeling_perceiver.PerceiverImagePostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor*, *transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor*. Note that you can define your own decoders, preprocessors and/or postprocessors to fit your use-case. Methods: forward
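A minimal end-to-end sketch for text classification, composing a text preprocessor and a classification decoder around a randomly initialized `PerceiverModel` (no pretrained weights are loaded, so the logits are meaningless until training):
```python
import torch
from transformers import PerceiverConfig, PerceiverTokenizer, PerceiverModel
from transformers.models.perceiver.modeling_perceiver import (
    PerceiverClassificationDecoder,
    PerceiverTextPreprocessor,
)

config = PerceiverConfig()
preprocessor = PerceiverTextPreprocessor(config)
decoder = PerceiverClassificationDecoder(
    config,
    num_channels=config.d_latents,
    trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
    use_query_residual=True,
)
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)

# byte-level tokenization of the raw text
tokenizer = PerceiverTokenizer()
inputs = tokenizer("hello world", return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model(inputs=inputs)
print(outputs.logits.shape)  # (batch_size, config.num_labels)
```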
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceivermodel
#perceivermodel
.md
407_23
Example use of Perceiver for masked language modeling. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
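A usage sketch assuming the pretrained `deepmind/language-perceiver` checkpoint; because the tokenizer is byte-level, the slice `52:61` below is assumed to cover exactly the bytes of " missing." for this particular sentence (offset by one for the leading special token):
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
encoding = tokenizer(text, padding="max_length", return_tensors="pt")
# mask the bytes corresponding to " missing."
encoding["input_ids"][0, 52:61] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(inputs=encoding["input_ids"], attention_mask=encoding["attention_mask"]).logits

predictions = logits[0, 52:61].argmax(dim=-1)
print(tokenizer.decode(predictions))  # the pretrained model should reconstruct roughly " missing."
```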
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformaskedlm
#perceiverformaskedlm
.md
407_24
Example use of Perceiver for text classification. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
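A forward-pass sketch (assuming the `deepmind/language-perceiver` checkpoint; the classification head is newly initialized when loading it into this class, so fine-tune before trusting the predictions):
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForSequenceClassification

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForSequenceClassification.from_pretrained("deepmind/language-perceiver")

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    logits = model(inputs=inputs.input_ids, attention_mask=inputs.attention_mask).logits
print(logits.shape)  # (batch_size, config.num_labels)
```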
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforsequenceclassification
#perceiverforsequenceclassification
.md
407_25
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses learned position embeddings. In other words, this model is not given any privileged information about the structure of images. As shown in the paper, this model can achieve a top-1 accuracy of 72.7 on ImageNet. [`PerceiverForImageClassificationLearned`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="conv1x1"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
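An inference sketch (assuming the `deepmind/vision-perceiver-learned` checkpoint, which was fine-tuned on ImageNet):
```python
import requests
import torch
from PIL import Image
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationLearned

image_processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    logits = model(inputs=inputs).logits
# highest-scoring ImageNet class (e.g. a cat label for this image)
print(model.config.id2label[logits.argmax(-1).item()])
```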
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationlearned
#perceiverforimageclassificationlearned
.md
407_26
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses fixed 2D Fourier position embeddings. As shown in the paper, this model can achieve a top-1 accuracy of 79.0 on ImageNet, and 84.5 when pre-trained on a large-scale dataset (i.e. JFT). [`PerceiverForImageClassificationFourier`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="pixels"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationfourier
#perceiverforimageclassificationfourier
.md
407_27
Example use of Perceiver for image classification, for tasks such as ImageNet. This model uses a 2D conv+maxpool preprocessing network. As shown in the paper, this model can achieve a top-1 accuracy of 82.1 on ImageNet. [`PerceiverForImageClassificationConvProcessing`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with `prep_type="conv"`) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverClassificationDecoder`] to decode the latent representation of [`PerceiverModel`] into classification logits. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforimageclassificationconvprocessing
#perceiverforimageclassificationconvprocessing
.md
407_28
Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. [`PerceiverForOpticalFlow`] uses [`~models.perceiver.modeling_perceiver.PerceiverImagePreprocessor`] (with *prep_type="patches"*) to preprocess the input images, and [`~models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder`] to decode the latent representation of [`PerceiverModel`]. As input, one concatenates 2 subsequent frames along the channel dimension and extract a 3 x 3 patch around each pixel (leading to 3 x 3 x 3 x 2 = 54 values for each pixel). Fixed Fourier position encodings are used to encode the position of each pixel in the patch. Next, one applies the Perceiver encoder. To decode, one queries the latent representation using the same encoding used for the input. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverforopticalflow
#perceiverforopticalflow
.md
407_29
Example use of Perceiver for multimodal (video) autoencoding, for tasks such as Kinetics-700. [`PerceiverForMultimodalAutoencoding`] uses [`~models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor`] to preprocess the 3 modalities: images, audio and class labels. This preprocessor uses modality-specific preprocessors to preprocess every modality separately, after which they are concatenated. Trainable position embeddings are used to pad each modality to the same number of channels to make concatenation along the time dimension possible. Next, one applies the Perceiver encoder. [`~models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder`] is used to decode the latent representation of [`PerceiverModel`]. This decoder uses each modality-specific decoder to construct queries. The decoder queries are created based on the inputs after preprocessing. However, autoencoding an entire video in a single forward pass is computationally infeasible, hence one only uses parts of the decoder queries to do cross-attention with the latent representation. This is determined by the subsampled indices for each modality, which can be provided as additional input to the forward pass of [`PerceiverForMultimodalAutoencoding`]. [`~models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder`] also pads the decoder queries of the different modalities to the same number of channels, in order to concatenate them along the time dimension. Next, cross-attention is performed with the latent representation of [`PerceiverModel`]. Finally, [`~models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor`] is used to turn this tensor into an actual video. It first splits up the output into the different modalities, and then applies the respective postprocessor for each modality. Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the "label" modality), this auto-encoding model becomes a Kinetics 700 video classifier. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PerceiverConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/perceiver.md
https://huggingface.co/docs/transformers/en/model_doc/perceiver/#perceiverformultimodalautoencoding
#perceiverformultimodalautoencoding
.md
407_30
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/
.md
408_0
The X-MOD model was proposed in [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. X-MOD extends multilingual masked language models like [XLM-R](xlm-roberta) to include language-specific modular components (_language adapters_) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen. The abstract from the paper is the following: *Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.* This model was contributed by [jvamvas](https://huggingface.co/jvamvas). The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/fairseq/models/xmod) and the original documentation is found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/examples/xmod).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#overview
#overview
.md
408_1
Tips: - X-MOD is similar to [XLM-R](xlm-roberta), but a difference is that the input language needs to be specified so that the correct language adapter can be activated. - The main models – base and large – have adapters for 81 languages.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#usage-tips
#usage-tips
.md
408_2
There are two ways to specify the input language: 1. By setting a default language before using the model: ```python from transformers import XmodModel model = XmodModel.from_pretrained("facebook/xmod-base") model.set_default_language("en_XX") ``` 2. By explicitly passing the index of the language adapter for each sample: ```python import torch input_ids = torch.tensor( [ [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2], [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2], ] ) lang_ids = torch.LongTensor( [ 0, # en_XX 8, # de_DE ] ) output = model(input_ids, lang_ids=lang_ids) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#input-language
#input-language
.md
408_3
The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided: ```python model.freeze_embeddings_and_language_adapters() # Fine-tune the model ... ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#fine-tuning
#fine-tuning
.md
408_4
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language: ```python model.set_default_language("de_DE") # Evaluate the model on German examples ... ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#cross-lingual-transfer
#cross-lingual-transfer
.md
408_5
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#resources
#resources
.md
408_6
This is the configuration class to store the configuration of a [`XmodModel`]. It is used to instantiate an X-MOD model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the X-MOD model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`XmodModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`XmodModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. pre_norm (`bool`, *optional*, defaults to `False`): Whether to apply layer normalization before each block. 
adapter_reduction_factor (`int` or `float`, *optional*, defaults to 2): The factor by which the dimensionality of the adapter is reduced relative to `hidden_size`. adapter_layer_norm (`bool`, *optional*, defaults to `False`): Whether to apply a new layer normalization before the adapter modules (shared across all adapters). adapter_reuse_layer_norm (`bool`, *optional*, defaults to `True`): Whether to reuse the second layer normalization and apply it before the adapter modules as well. ln_before_adapter (`bool`, *optional*, defaults to `True`): Whether to apply the layer normalization before the residual connection around the adapter module. languages (`Iterable[str]`, *optional*, defaults to `["en_XX"]`): An iterable of language codes for which adapter modules should be initialized. default_language (`str`, *optional*): Language code of a default language. It will be assumed that the input is in this language if no language codes are explicitly passed to the forward method. Examples: ```python >>> from transformers import XmodConfig, XmodModel >>> # Initializing an X-MOD facebook/xmod-base style configuration >>> configuration = XmodConfig() >>> # Initializing a model (with random weights) from the facebook/xmod-base style configuration >>> model = XmodModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodconfig
#xmodconfig
.md
408_7
The bare X-MOD Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [*Attention is all you need*](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
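A minimal forward-pass sketch (assuming the `facebook/xmod-base` checkpoint and that `AutoTokenizer` resolves to the XLM-R sentencepiece tokenizer reused by X-MOD):
```python
import torch
from transformers import AutoTokenizer, XmodModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # activate the English adapter for all samples

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```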
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodmodel
#xmodmodel
.md
408_8
X-MOD Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforcausallm
#xmodforcausallm
.md
408_9
X-MOD Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
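A fill-mask sketch, assuming the `facebook/xmod-base` checkpoint ships its pretrained MLM head (the model was pretrained with masked language modeling, so this should hold, but treat it as an assumption):
```python
import torch
from transformers import AutoTokenizer, XmodForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForMaskedLM.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")

inputs = tokenizer(f"The capital of France is {tokenizer.mask_token}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the masked position and take the highest-scoring prediction
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(-1)
print(tokenizer.decode(predicted_ids))  # ideally something like "Paris"
```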
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformaskedlm
#xmodformaskedlm
.md
408_10
X-MOD Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
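A fine-tuning sketch with illustrative (hypothetical) labels; freezing the embeddings and language adapters follows the recommendation from the fine-tuning section above, and the loss is meant to be plugged into your own training loop:
```python
import torch
from transformers import AutoTokenizer, XmodForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForSequenceClassification.from_pretrained("facebook/xmod-base", num_labels=2)
model.set_default_language("en_XX")
model.freeze_embeddings_and_language_adapters()  # keep embeddings and language adapters frozen

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # hypothetical sentiment labels

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # gradients for one step; hand them to an optimizer of your choice
```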
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforsequenceclassification
#xmodforsequenceclassification
.md
408_11
X-MOD Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodformultiplechoice
#xmodformultiplechoice
.md
408_12
X-MOD Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodfortokenclassification
#xmodfortokenclassification
.md
408_13
X-MOD Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XmodConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xmod.md
https://huggingface.co/docs/transformers/en/model_doc/xmod/#xmodforquestionanswering
#xmodforquestionanswering
.md
408_14