No docstring available for TFElectraModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectramodel
#tfelectramodel
.md
353_17
No docstring available for TFElectraForPreTraining Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforpretraining
#tfelectraforpretraining
.md
353_18
No docstring available for TFElectraForMaskedLM Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraformaskedlm
#tfelectraformaskedlm
.md
353_19
No docstring available for TFElectraForSequenceClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforsequenceclassification
#tfelectraforsequenceclassification
.md
353_20
No docstring available for TFElectraForMultipleChoice Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraformultiplechoice
#tfelectraformultiplechoice
.md
353_21
No docstring available for TFElectraForTokenClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectrafortokenclassification
#tfelectrafortokenclassification
.md
353_22
No docstring available for TFElectraForQuestionAnswering Methods: call </tf> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#tfelectraforquestionanswering
#tfelectraforquestionanswering
.md
353_23
No docstring available for FlaxElectraModel Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectramodel
#flaxelectramodel
.md
353_24
No docstring available for FlaxElectraForPreTraining Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforpretraining
#flaxelectraforpretraining
.md
353_25
No docstring available for FlaxElectraForCausalLM Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforcausallm
#flaxelectraforcausallm
.md
353_26
No docstring available for FlaxElectraForMaskedLM Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraformaskedlm
#flaxelectraformaskedlm
.md
353_27
No docstring available for FlaxElectraForSequenceClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforsequenceclassification
#flaxelectraforsequenceclassification
.md
353_28
No docstring available for FlaxElectraForMultipleChoice Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraformultiplechoice
#flaxelectraformultiplechoice
.md
353_29
No docstring available for FlaxElectraForTokenClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectrafortokenclassification
#flaxelectrafortokenclassification
.md
353_30
No docstring available for FlaxElectraForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/electra.md
https://huggingface.co/docs/transformers/en/model_doc/electra/#flaxelectraforquestionanswering
#flaxelectraforquestionanswering
.md
353_31
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/
.md
354_0
The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier and Michael Auli. It is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/). The abstract from the paper is the following: *fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.* This model was contributed by [andreasmadsen](https://huggingface.co/andreasmadsen). The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#overview
#overview
.md
354_1
- The implementation is the same as [Roberta](roberta) except that instead of using _Add and Norm_ it uses _Norm and Add_. _Add_ and _Norm_ refer to the addition and layer normalization described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762) (see the sketch after this list). - This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).
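The difference is easiest to see side by side. Below is a minimal sketch of the two residual orderings — not code from the library: `PostLNBlock` and `PreLNBlock` are illustrative names, and the `sublayer` argument stands in for either the self-attention or feed-forward block.

```python
import torch
from torch import nn


class PostLNBlock(nn.Module):
    """RoBERTa ordering: Add then Norm, i.e. LayerNorm(x + sublayer(x))."""

    def __init__(self, dim: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.sublayer(x))


class PreLNBlock(nn.Module):
    """RoBERTa-PreLayerNorm ordering: Norm then Add, i.e. x + sublayer(LayerNorm(x))."""

    def __init__(self, dim: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.sublayer(self.norm(x))
```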
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#usage-tips
#usage-tips
.md
354_2
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#resources
#resources
.md
354_3
This is the configuration class to store the configuration of a [`RobertaPreLayerNormModel`] or a [`TFRobertaPreLayerNormModel`]. It is used to instantiate a RoBERTa-PreLayerNorm model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RoBERTa-PreLayerNorm [andreasmadsen/efficient_mlm_m0.40](https://huggingface.co/andreasmadsen/efficient_mlm_m0.40) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the RoBERTa-PreLayerNorm model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`RobertaPreLayerNormModel`] or [`TFRobertaPreLayerNormModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RobertaPreLayerNormModel`] or [`TFRobertaPreLayerNormModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.
classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Examples:

```python
>>> from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel

>>> # Initializing a RoBERTa-PreLayerNorm configuration
>>> configuration = RobertaPreLayerNormConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RobertaPreLayerNormModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

<frameworkcontent> <pt>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormconfig
#robertaprelayernormconfig
.md
354_4
The bare RoBERTa-PreLayerNorm Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
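A minimal sketch of the decoder setup described above, assuming only the config flags named in this section; the checkpoint id is the one referenced in the configuration docs:

```python
>>> from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel

>>> # Set is_decoder (and add_cross_attention for Seq2Seq use) before instantiating
>>> config = RobertaPreLayerNormConfig.from_pretrained(
...     "andreasmadsen/efficient_mlm_m0.40", is_decoder=True, add_cross_attention=True
... )
>>> # Cross-attention weights are newly initialized and need fine-tuning before use
>>> model = RobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40", config=config)
```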
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormmodel
#robertaprelayernormmodel
.md
354_5
RoBERTa-PreLayerNorm Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforcausallm
#robertaprelayernormforcausallm
.md
354_6
RoBERTa-PreLayerNorm Model with a `language modeling` head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
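As a quick usage illustration for the masked-LM head, here is a minimal fill-mask sketch; it assumes the `andreasmadsen/efficient_mlm_m0.40` checkpoint (mentioned in the configuration section) ships the matching tokenizer files:

```python
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="andreasmadsen/efficient_mlm_m0.40")
>>> unmasker("The capital of France is <mask>.")  # returns the top candidate fills with scores
```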
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformaskedlm
#robertaprelayernormformaskedlm
.md
354_7
RoBERTa-PreLayerNorm Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforsequenceclassification
#robertaprelayernormforsequenceclassification
.md
354_8
RobertaPreLayerNorm Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormformultiplechoice
#robertaprelayernormformultiplechoice
.md
354_9
RobertaPreLayerNorm Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormfortokenclassification
#robertaprelayernormfortokenclassification
.md
354_10
RobertaPreLayerNorm Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RobertaPreLayerNormConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#robertaprelayernormforquestionanswering
#robertaprelayernormforquestionanswering
.md
354_11
No docstring available for TFRobertaPreLayerNormModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormmodel
#tfrobertaprelayernormmodel
.md
354_12
No docstring available for TFRobertaPreLayerNormForCausalLM Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforcausallm
#tfrobertaprelayernormforcausallm
.md
354_13
No docstring available for TFRobertaPreLayerNormForMaskedLM Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormformaskedlm
#tfrobertaprelayernormformaskedlm
.md
354_14
No docstring available for TFRobertaPreLayerNormForSequenceClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforsequenceclassification
#tfrobertaprelayernormforsequenceclassification
.md
354_15
No docstring available for TFRobertaPreLayerNormForMultipleChoice Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormformultiplechoice
#tfrobertaprelayernormformultiplechoice
.md
354_16
No docstring available for TFRobertaPreLayerNormForTokenClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormfortokenclassification
#tfrobertaprelayernormfortokenclassification
.md
354_17
No docstring available for TFRobertaPreLayerNormForQuestionAnswering Methods: call </tf> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#tfrobertaprelayernormforquestionanswering
#tfrobertaprelayernormforquestionanswering
.md
354_18
No docstring available for FlaxRobertaPreLayerNormModel Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormmodel
#flaxrobertaprelayernormmodel
.md
354_19
No docstring available for FlaxRobertaPreLayerNormForCausalLM Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforcausallm
#flaxrobertaprelayernormforcausallm
.md
354_20
No docstring available for FlaxRobertaPreLayerNormForMaskedLM Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormformaskedlm
#flaxrobertaprelayernormformaskedlm
.md
354_21
No docstring available for FlaxRobertaPreLayerNormForSequenceClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforsequenceclassification
#flaxrobertaprelayernormforsequenceclassification
.md
354_22
No docstring available for FlaxRobertaPreLayerNormForMultipleChoice Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormformultiplechoice
#flaxrobertaprelayernormformultiplechoice
.md
354_23
No docstring available for FlaxRobertaPreLayerNormForTokenClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormfortokenclassification
#flaxrobertaprelayernormfortokenclassification
.md
354_24
No docstring available for FlaxRobertaPreLayerNormForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roberta-prelayernorm.md
https://huggingface.co/docs/transformers/en/model_doc/roberta-prelayernorm/#flaxrobertaprelayernormforquestionanswering
#flaxrobertaprelayernormforquestionanswering
.md
354_25
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/
.md
355_0
The MobileViTV2 model was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari. MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The abstract from the paper is the following: *Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k²) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.* This model was contributed by [shehan97](https://huggingface.co/shehan97). The original code can be found [here](https://github.com/apple/ml-cvnets).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#overview
#overview
.md
355_1
- MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/).
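Here is a minimal sketch of the preprocessing and classification flow described in these tips. The checkpoint id `apple/mobilevitv2-1.0-imagenet1k-256` is assumed to be one of the ImageNet-1k checkpoints on the Hub; check the Hub for the exact name:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"  # assumed Hub checkpoint id
>>> processor = AutoImageProcessor.from_pretrained(checkpoint)  # handles the BGR channel flip
>>> model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax(-1).item()])
```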
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#usage-tips
#usage-tips
.md
355_2
This is the configuration class to store the configuration of a [`MobileViTV2Model`]. It is used to instantiate a MobileViTV2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileViTV2 [apple/mobilevitv2-1.0](https://huggingface.co/apple/mobilevitv2-1.0) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_channels (`int`, *optional*, defaults to 3): The number of input channels. image_size (`int`, *optional*, defaults to 256): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 2): The size (resolution) of each patch. expand_ratio (`float`, *optional*, defaults to 2.0): Expansion factor for the MobileNetv2 layers. hidden_act (`str` or `function`, *optional*, defaults to `"swish"`): The non-linear activation function (function or string) in the Transformer encoder and convolution layers. conv_kernel_size (`int`, *optional*, defaults to 3): The size of the convolutional kernel in the MobileViTV2 layer. output_stride (`int`, *optional*, defaults to 32): The ratio of the spatial resolution of the output to the resolution of the input image. classifier_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for attached classifiers. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. aspp_out_channels (`int`, *optional*, defaults to 512): Number of output channels used in the ASPP layer for semantic segmentation. atrous_rates (`List[int]`, *optional*, defaults to `[6, 12, 18]`): Dilation (atrous) factors used in the ASPP layer for semantic segmentation. aspp_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the ASPP layer for semantic segmentation. semantic_loss_ignore_index (`int`, *optional*, defaults to 255): The index that is ignored by the loss function of the semantic segmentation model. n_attn_blocks (`List[int]`, *optional*, defaults to `[2, 4, 3]`): The number of attention blocks in each MobileViTV2Layer. base_attn_unit_dims (`List[int]`, *optional*, defaults to `[128, 192, 256]`): The base multiplier for dimensions of attention blocks in each MobileViTV2Layer. width_multiplier (`float`, *optional*, defaults to 1.0): The width multiplier for MobileViTV2. ffn_multiplier (`int`, *optional*, defaults to 2): The FFN multiplier for MobileViTV2. attn_dropout (`float`, *optional*, defaults to 0.0): The dropout in the attention layer. ffn_dropout (`float`, *optional*, defaults to 0.0): The dropout between FFN layers. Example:

```python
>>> from transformers import MobileViTV2Config, MobileViTV2Model

>>> # Initializing a mobilevitv2-small style configuration
>>> configuration = MobileViTV2Config()

>>> # Initializing a model from the mobilevitv2-small style configuration
>>> model = MobileViTV2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2config
#mobilevitv2config
.md
355_3
The bare MobileViTV2 model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2model
#mobilevitv2model
.md
355_4
MobileViTV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forimageclassification
#mobilevitv2forimageclassification
.md
355_5
MobileViTV2 model with a semantic segmentation head on top, e.g. for Pascal VOC. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MobileViTV2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
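A minimal segmentation sketch under the same assumptions as the classification example above; the checkpoint id below is hypothetical — substitute a real PASCAL VOC checkpoint from the Hub:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileViTV2ForSemanticSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> checkpoint = "apple/mobilevitv2-1.0-voc-deeplabv3"  # hypothetical id -- check the Hub for the actual name
>>> processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = MobileViTV2ForSemanticSegmentation.from_pretrained(checkpoint)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch, num_labels, h, w) at reduced spatial resolution
>>> segmentation = logits.argmax(dim=1)  # per-pixel class indices
```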
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilevitv2.md
https://huggingface.co/docs/transformers/en/model_doc/mobilevitv2/#mobilevitv2forsemanticsegmentation
#mobilevitv2forsemanticsegmentation
.md
355_6
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/
.md
356_0
The Grounding DINO model was proposed in [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. Grounding DINO extends a closed-set object detection model with a text encoder, enabling open-set object detection. The model achieves remarkable results, such as 52.5 AP on COCO zero-shot. The abstract from the paper is the following: *In this paper, we present an open-set object detector, called Grounding DINO, by marrying Transformer-based detector DINO with grounded pre-training, which can detect arbitrary objects with human inputs such as category names or referring expressions. The key solution of open-set object detection is introducing language to a closed-set detector for open-set concept generalization. To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion. While previous works mainly evaluate open-set object detection on novel categories, we propose to also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs remarkably well on all three settings, including benchmarks on COCO, LVIS, ODinW, and RefCOCO/+/g. Grounding DINO achieves a 52.5 AP on the COCO detection zero-shot transfer benchmark, i.e., without any training data from COCO. It sets a new record on the ODinW zero-shot benchmark with a mean 26.1 AP.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grouding_dino_architecture.png" alt="drawing" width="600"/> <small> Grounding DINO overview. Taken from the <a href="https://arxiv.org/abs/2303.05499">original paper</a>. </small> This model was contributed by [EduardoPacheco](https://huggingface.co/EduardoPacheco) and [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/IDEA-Research/GroundingDINO).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#overview
#overview
.md
356_1
- One can use [`GroundingDinoProcessor`] to prepare image-text pairs for the model. - To separate classes in the text, use a period, e.g. "a cat. a dog." - When using multiple classes (e.g. `"a cat. a dog."`), use `post_process_grounded_object_detection` from [`GroundingDinoProcessor`] to post-process the outputs, since the labels returned by `post_process_object_detection` are only the indices along the model dimension where prob > threshold. Here's how to use the model for zero-shot object detection:

```python
>>> import requests

>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> model_id = "IDEA-Research/grounding-dino-tiny"
>>> device = "cuda"

>>> processor = AutoProcessor.from_pretrained(model_id)
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

>>> image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(image_url, stream=True).raw)
>>> # Check for cats and remote controls
>>> text_labels = [["a cat", "a remote control"]]

>>> inputs = processor(images=image, text=text_labels, return_tensors="pt").to(device)
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> results = processor.post_process_grounded_object_detection(
...     outputs,
...     threshold=0.4,
...     text_threshold=0.3,
...     target_sizes=[(image.height, image.width)]
... )

>>> # Retrieve the first image result
>>> result = results[0]
>>> for box, score, text_label in zip(result["boxes"], result["scores"], result["text_labels"]):
...     box = [round(x, 2) for x in box.tolist()]
...     print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}")
Detected a cat with confidence 0.479 at location [344.7, 23.11, 637.18, 374.28]
Detected a cat with confidence 0.438 at location [12.27, 51.91, 316.86, 472.44]
Detected a remote control with confidence 0.478 at location [38.57, 70.0, 176.78, 118.18]
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#usage-tips
#usage-tips
.md
356_2
One can combine Grounding DINO with the [Segment Anything](sam) model for text-based mask generation as introduced in [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). You can refer to this [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb) 🌍 for details. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grounded_sam.png" alt="drawing" width="900"/> <small> Grounded SAM overview. Taken from the <a href="https://github.com/IDEA-Research/Grounded-Segment-Anything">original repository</a>. </small>
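For readers who want the mechanics, here is a hedged sketch of the Grounded SAM idea: the boxes predicted by Grounding DINO (the `result` dict, `image`, and `device` from the usage example above) are passed to SAM as box prompts. The SAM classes and `post_process_masks` call are the standard transformers SAM API; the exact plumbing in the Grounded SAM repository may differ:

```python
>>> import torch
>>> from transformers import SamModel, SamProcessor

>>> sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
>>> sam_model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

>>> # One list of (x1, y1, x2, y2) boxes per image, taken from the Grounding DINO result
>>> input_boxes = [result["boxes"].tolist()]
>>> sam_inputs = sam_processor(image, input_boxes=input_boxes, return_tensors="pt").to(device)
>>> with torch.no_grad():
...     sam_outputs = sam_model(**sam_inputs)

>>> # Upscale the low-resolution mask logits back to the original image size
>>> masks = sam_processor.image_processor.post_process_masks(
...     sam_outputs.pred_masks.cpu(),
...     sam_inputs["original_sizes"].cpu(),
...     sam_inputs["reshaped_input_sizes"].cpu(),
... )  # one boolean mask per detected box
```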
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#grounded-sam
#grounded-sam
.md
356_3
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Grounding DINO. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Demo notebooks regarding inference with Grounding DINO as well as combining it with [SAM](sam) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Grounding%20DINO). 🌎
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#resources
#resources
.md
356_4
Constructs a Grounding DINO image processor. Args: format (`str`, *optional*, defaults to `AnnotationFormat.COCO_DETECTION`): Data format of the annotations. One of "coco_detection" or "coco_panoptic". do_resize (`bool`, *optional*, defaults to `True`): Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 800, "longest_edge": 1333}`): Size of the image's `(height, width)` dimensions after resizing. Can be overridden by the `size` parameter in the `preprocess` method. Available options are: - `{"height": int, "width": int}`: The image will be resized to the exact size `(height, width)`. Do NOT keep the aspect ratio. - `{"shortest_edge": int, "longest_edge": int}`: The image will be resized to a maximum size respecting the aspect ratio and keeping the shortest edge less than or equal to `shortest_edge` and the longest edge less than or equal to `longest_edge`. - `{"max_height": int, "max_width": int}`: The image will be resized to the maximum size respecting the aspect ratio and keeping the height less than or equal to `max_height` and the width less than or equal to `max_width`. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. do_rescale (`bool`, *optional*, defaults to `True`): Controls whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`): Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`): Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_annotations (`bool`, *optional*, defaults to `True`): Controls whether to convert the annotations to the format expected by the DETR model. Converts the bounding boxes to the format `(center_x, center_y, width, height)` and in the range `[0, 1]`. Can be overridden by the `do_convert_annotations` parameter in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`): Controls whether to pad the image. Can be overridden by the `do_pad` parameter in the `preprocess` method. If `True`, padding will be applied to the bottom and right of the image with zeros. If `pad_size` is provided, the image will be padded to the specified dimensions. Otherwise, the image will be padded to the maximum height and width of the batch. pad_size (`Dict[str, int]`, *optional*): The size `{"height": int, "width": int}` to pad the images to. Must be larger than any image size provided for preprocessing.
If `pad_size` is not provided, images will be padded to the largest height and width in the batch. Methods: preprocess - post_process_object_detection
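To make the `size` options above concrete, here is a small sketch constructing the processor directly; the values are just the documented defaults plus an illustrative fixed-size variant:

```python
>>> from transformers import GroundingDinoImageProcessor

>>> # Default: keep aspect ratio, shortest edge at most 800 and longest edge at most 1333
>>> processor = GroundingDinoImageProcessor(size={"shortest_edge": 800, "longest_edge": 1333})

>>> # Exact-size variant: resizes to (height, width) without preserving aspect ratio
>>> processor = GroundingDinoImageProcessor(size={"height": 800, "width": 1333})
```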
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoimageprocessor
#groundingdinoimageprocessor
.md
356_5
Constructs a Grounding DINO processor which wraps a Deformable DETR image processor and a BERT tokenizer into a single processor. [`GroundingDinoProcessor`] offers all the functionalities of [`GroundingDinoImageProcessor`] and [`AutoTokenizer`]. See the docstring of [`~GroundingDinoProcessor.__call__`] and [`~GroundingDinoProcessor.decode`] for more information. Args: image_processor (`GroundingDinoImageProcessor`): An instance of [`GroundingDinoImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. Methods: post_process_grounded_object_detection
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoprocessor
#groundingdinoprocessor
.md
356_6
This is the configuration class to store the configuration of a [`GroundingDinoModel`]. It is used to instantiate a Grounding DINO model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Grounding DINO [IDEA-Research/grounding-dino-tiny](https://huggingface.co/IDEA-Research/grounding-dino-tiny) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`PretrainedConfig` or `dict`, *optional*, defaults to `ResNetConfig()`): The configuration of the backbone model. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `BertConfig`): The config object or dictionary of the text backbone. num_queries (`int`, *optional*, defaults to 900): Number of object queries, i.e. detection slots. This is the maximal number of objects [`GroundingDinoModel`] can detect in a single image. encoder_layers (`int`, *optional*, defaults to 6): Number of encoder layers. encoder_ffn_dim (`int`, *optional*, defaults to 2048): Dimension of the "intermediate" (often named feed-forward) layer in the encoder. encoder_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. decoder_layers (`int`, *optional*, defaults to 6): Number of decoder layers. decoder_ffn_dim (`int`, *optional*, defaults to 2048): Dimension of the "intermediate" (often named feed-forward) layer in the decoder. decoder_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer decoder. is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether the model is used as an encoder/decoder or not. activation_function (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. d_model (`int`, *optional*, defaults to 256): Dimension of the layers. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. auxiliary_loss (`bool`, *optional*, defaults to `False`): Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (`str`, *optional*, defaults to `"sine"`): Type of position embeddings to be used on top of the image features. One of `"sine"` or `"learned"`. num_feature_levels (`int`, *optional*, defaults to 4): The number of input feature levels. encoder_n_points (`int`, *optional*, defaults to 4): The number of sampled keys in each feature level for each attention head in the encoder. decoder_n_points (`int`, *optional*, defaults to 4): The number of sampled keys in each feature level for each attention head in the decoder. two_stage (`bool`, *optional*, defaults to `True`): Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of Grounding DINO, which are further fed into the decoder for iterative bounding box refinement. class_cost (`float`, *optional*, defaults to 1.0): Relative weight of the classification error in the Hungarian matching cost. bbox_cost (`float`, *optional*, defaults to 5.0): Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (`float`, *optional*, defaults to 2.0): Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. bbox_loss_coefficient (`float`, *optional*, defaults to 5.0): Relative weight of the L1 bounding box loss in the object detection loss. giou_loss_coefficient (`float`, *optional*, defaults to 2.0): Relative weight of the generalized IoU loss in the object detection loss. focal_alpha (`float`, *optional*, defaults to 0.25): Alpha parameter in the focal loss. disable_custom_kernels (`bool`, *optional*, defaults to `False`): Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom kernels are not supported by PyTorch ONNX export. max_text_len (`int`, *optional*, defaults to 256): The maximum length of the text input. text_enhancer_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the text enhancer. fusion_droppath (`float`, *optional*, defaults to 0.1): The droppath ratio for the fusion module. fusion_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the fusion module. embedding_init_target (`bool`, *optional*, defaults to `True`): Whether to initialize the target with Embedding weights. query_dim (`int`, *optional*, defaults to 4): The dimension of the query vector. decoder_bbox_embed_share (`bool`, *optional*, defaults to `True`): Whether to share the bbox regression head for all decoder layers. two_stage_bbox_embed_share (`bool`, *optional*, defaults to `False`): Whether to share the bbox embedding between the two-stage bbox generator and the region proposal generation. positional_embedding_temperature (`float`, *optional*, defaults to 20): The temperature for Sine Positional Embedding that is used together with vision backbone. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. 
Examples:

```python
>>> from transformers import GroundingDinoConfig, GroundingDinoModel

>>> # Initializing a Grounding DINO IDEA-Research/grounding-dino-tiny style configuration
>>> configuration = GroundingDinoConfig()

>>> # Initializing a model (with random weights) from the IDEA-Research/grounding-dino-tiny style configuration
>>> model = GroundingDinoModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoconfig
#groundingdinoconfig
.md
356_7
The bare Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GroundingDinoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinomodel
#groundingdinomodel
.md
356_8
Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`GroundingDinoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/grounding-dino.md
https://huggingface.co/docs/transformers/en/model_doc/grounding-dino/#groundingdinoforobjectdetection
#groundingdinoforobjectdetection
.md
356_9
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/
.md
357_0
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=marian"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-marian-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/opus-mt-zh-en"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmt
#marianmt
.md
357_1
A framework for translation models, using the same model architecture as BART. Translations should be similar to, but not identical to, the output in the test set linked in each model card. This model was contributed by [sshleifer](https://huggingface.co/sshleifer).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#overview
#overview
.md
357_2
- Each model is about 298 MB on disk, and there are more than 1,000 models. - The list of supported language pairs can be found [here](https://huggingface.co/Helsinki-NLP). - Models were originally trained by [Jörg Tiedemann](https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann) using the [Marian](https://marian-nmt.github.io/) C++ library, which supports fast training and translation. - All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. - The 80 opus models that require BPE preprocessing are not supported. - The modeling code is the same as [`BartForConditionalGeneration`] with a few minor modifications (see the sketch after this list): - static (sinusoid) positional embeddings (`MarianConfig.static_position_embeddings=True`) - no layernorm_embedding (`MarianConfig.normalize_embedding=False`) - the model starts generating with `pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses `</s>`). - Code to bulk convert models can be found in `convert_marian_to_pytorch.py`.
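These configuration differences can be checked directly on a pretrained checkpoint. A minimal sketch (the printed values assume the stock `Helsinki-NLP/opus-mt-en-de` checkpoint):

```python
>>> from transformers import MarianConfig

>>> config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")
>>> # static sinusoidal positional embeddings, no layernorm_embedding
>>> config.static_position_embeddings, config.normalize_embedding
(True, False)
>>> # generation starts from pad_token_id rather than a </s>-style prefix token
>>> config.decoder_start_token_id == config.pad_token_id
True
```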
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#implementation-notes
#implementation-notes
.md
357_3
- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}` - The language codes used to name models are inconsistent. Two-digit codes can usually be found [here](https://developers.google.com/admin-sdk/directory/v1/languages), while three-digit codes require googling "language code {code}". - Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina. - The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second group uses a combination of ISO-639-5 and ISO-639-2 codes.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#naming
#naming
.md
357_4
- Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. - [Fine-tune on GPU](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/train_distil_marian_enro.sh)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#examples
#examples
.md
357_5
- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`: - If a model can output multiple languages, you should specify a language code by prepending the desired output language to the `src_text`. - You can see a model's supported language codes in its model card, under target constituents, like in [opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa). - Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language codes are required. New multi-lingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require three-character language codes: ```python >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ ... ">>fra<< this is a sentence in english that we want to translate to french", ... ">>por<< This should go to portuguese", ... ">>esp<< And this to Spanish", ... ] >>> model_name = "Helsinki-NLP/opus-mt-en-roa" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> print(tokenizer.supported_language_codes) ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<'] >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ``` Here is the code to list all available pretrained models on the Hub: ```python from huggingface_hub import list_models model_list = list_models() org = "Helsinki-NLP" model_ids = [x.id for x in model_list if x.id.startswith(org)] suffix = [x.split("/")[1] for x in model_ids] old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#multilingual-models
#multilingual-models
.md
357_6
These are the old-style multi-lingual models ported from the OPUS-MT-Train repo, along with the members of each language group: ```python no-style ['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en', 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH', 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE', 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI', 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH'] GROUP_MEMBERS = { 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'], 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'], 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'], 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'], 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'], 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv'] } ``` Example of translating English to many Romance languages, using old-style two-character language codes: ```python >>> from transformers import MarianMTModel, MarianTokenizer >>> src_text = [ ... ">>fr<< this is a sentence in english that we want to translate to french", ... ">>pt<< This should go to portuguese", ... ">>es<< And this to Spanish", ... ] >>> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" >>> tokenizer = MarianTokenizer.from_pretrained(model_name) >>> model = MarianMTModel.from_pretrained(model_name) >>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) >>> tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#old-style-multi-lingual-models
#old-style-multi-lingual-models
.md
357_7
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) - [Causal language modeling task guide](../tasks/language_modeling)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#resources
#resources
.md
357_8
This is the configuration class to store the configuration of a [`MarianModel`]. It is used to instantiate a Marian model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Marian [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 58101): Vocabulary size of the Marian model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`MarianModel`] or [`TFMarianModel`]. d_model (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. encoder_layers (`int`, *optional*, defaults to 12): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 12): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder. encoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the encoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. scale_embedding (`bool`, *optional*, defaults to `False`): Scale embeddings by dividing by sqrt(d_model). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). forced_eos_token_id (`int`, *optional*, defaults to 0): The id of the token to force as the last generated token when `max_length` is reached. Usually set to `eos_token_id`. 
Examples: ```python >>> from transformers import MarianModel, MarianConfig >>> # Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration >>> configuration = MarianConfig() >>> # Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration >>> model = MarianModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
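As with any `PretrainedConfig`, individual attributes can be overridden at construction time. A minimal sketch (the values are illustrative, not a recommended setup):

```python
>>> from transformers import MarianConfig, MarianModel

>>> # A smaller-than-default Marian architecture for quick experiments
>>> configuration = MarianConfig(encoder_layers=6, decoder_layers=6, d_model=512)
>>> model = MarianModel(configuration)
>>> model.config.d_model
512
```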
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianconfig
#marianconfig
.md
357_9
Construct a Marian tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: source_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the source language. target_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the target language. source_lang (`str`, *optional*): A string representing the source language. target_lang (`str`, *optional*): A string representing the target language. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. model_max_length (`int`, *optional*, defaults to 512): The maximum sentence length the model accepts. additional_special_tokens (`List[str]`, *optional*, defaults to `["<eop>", "<eod>"]`): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite, samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Examples: ```python >>> from transformers import MarianForCausalLM, MarianTokenizer >>> model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> src_texts = ["I am a small frog.", "Tom asked his teacher for advice."] >>> tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] # optional >>> inputs = tokenizer(src_texts, text_target=tgt_texts, return_tensors="pt", padding=True) >>> outputs = model(**inputs) # should work ``` Methods: build_inputs_with_special_tokens <frameworkcontent> <pt>
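Subword regularization can be enabled through the `sp_model_kwargs` argument described above. A minimal sketch (the sampling parameters are illustrative):

```python
>>> from transformers import MarianTokenizer

>>> # Enable SentencePiece unigram subword sampling, e.g. for data augmentation
>>> tokenizer = MarianTokenizer.from_pretrained(
...     "Helsinki-NLP/opus-mt-en-de",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> # With sampling enabled, repeated calls may yield different segmentations
>>> tokenizer.tokenize("I am a small frog.")  # doctest: +SKIP
```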
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#mariantokenizer
#mariantokenizer
.md
357_10
The bare Marian Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MarianConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmodel
#marianmodel
.md
357_11
The Marian Model with a language modeling head. Can be used for summarization. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MarianConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianmtmodel
#marianmtmodel
.md
357_12
No docstring available for MarianForCausalLM Methods: forward </pt> <tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#marianforcausallm
#marianforcausallm
.md
357_13
No docstring available for TFMarianModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#tfmarianmodel
#tfmarianmodel
.md
357_14
No docstring available for TFMarianMTModel Methods: call </tf> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#tfmarianmtmodel
#tfmarianmtmodel
.md
357_15
No docstring available for FlaxMarianModel Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#flaxmarianmodel
#flaxmarianmodel
.md
357_16
No docstring available for FlaxMarianMTModel Methods: __call__ </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/marian.md
https://huggingface.co/docs/transformers/en/model_doc/marian/#flaxmarianmtmodel
#flaxmarianmtmodel
.md
357_17
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/
.md
358_0
The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoder to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text training, and can then be used for zero-shot vision tasks such as image classification or retrieval. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#overview
#overview
.md
358_1
[`VisionTextDualEncoderConfig`] is the configuration class to store the configuration of a [`VisionTextDualEncoderModel`]. It is used to instantiate a [`VisionTextDualEncoderModel`] according to the specified arguments, defining the text model and vision model configs. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation. kwargs (*optional*): Dictionary of keyword arguments. Examples: ```python >>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel >>> # Initializing a BERT and ViT configuration >>> config_vision = ViTConfig() >>> config_text = BertConfig() >>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512) >>> # Initializing a BERT and ViT model (with random weights) >>> model = VisionTextDualEncoderModel(config=config) >>> # Accessing the model configuration >>> config_vision = model.config.vision_config >>> config_text = model.config.text_config >>> # Saving the model, including its configuration >>> model.save_pretrained("vit-bert") >>> # loading model and config from pretrained folder >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert") >>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderconfig
#visiontextdualencoderconfig
.md
358_2
Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor. [`VisionTextDualEncoderProcessor`] offers all the functionalities of [`AutoImageProcessor`] and [`AutoTokenizer`]. See the [`~VisionTextDualEncoderProcessor.__call__`] and [`~VisionTextDualEncoderProcessor.decode`] for more information. Args: image_processor ([`AutoImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`PreTrainedTokenizer`], *optional*): The tokenizer is a required input. <frameworkcontent> <pt>
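Since the processor simply wraps the two components described above, it can be assembled from any compatible pair. A minimal sketch (the checkpoint names are illustrative):

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer, VisionTextDualEncoderProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> # Wrap the image processor and the tokenizer into a single processor
>>> processor = VisionTextDualEncoderProcessor(image_processor=image_processor, tokenizer=tokenizer)
```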
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencoderprocessor
#visiontextdualencoderprocessor
.md
358_3
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [`~AutoModel.from_pretrained`] method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`VisionTextDualEncoderConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#visiontextdualencodermodel
#visiontextdualencodermodel
.md
358_4
No docstring available for FlaxVisionTextDualEncoderModel Methods: __call__ </jax> <tf>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#flaxvisiontextdualencodermodel
#flaxvisiontextdualencodermodel
.md
358_5
No docstring available for TFVisionTextDualEncoderModel Methods: call </tf> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vision-text-dual-encoder.md
https://huggingface.co/docs/transformers/en/model_doc/vision-text-dual-encoder/#tfvisiontextdualencodermodel
#tfvisiontextdualencodermodel
.md
358_6
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/
.md
359_0
The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: *Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/fairseq).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#overview
#overview
.md
359_1
- M2M100ForConditionalGeneration is the base model for both NLLB and NLLB-MoE. - NLLB-MoE is very similar to the NLLB model, but its feed-forward layer is based on the implementation of SwitchTransformers. - The tokenizer is the same as for the NLLB models.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#usage-tips
#usage-tips
.md
359_2
The biggest difference is the way the tokens are routed. NLLB-MoE uses a `top-2-gate`, which means that for each input, only the top two experts are selected based on the highest predicted probabilities from the gating network, and the remaining experts are ignored. In `SwitchTransformers`, only the top-1 probabilities are computed, which means that each token has a lower probability of being forwarded. Moreover, if a token is not routed to any expert, `SwitchTransformers` still adds its unmodified hidden states (kind of like a residual connection), while in `NLLB`'s top-2 routing mechanism such states are masked.
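To make the mechanism concrete, here is a minimal, self-contained sketch of top-2 gating. This is an illustration of the idea described above, not the library's actual routing code:

```python
import torch


def top2_gate(router_logits: torch.Tensor):
    """Toy top-2 routing; router_logits has shape (num_tokens, num_experts)."""
    probs = torch.softmax(router_logits, dim=-1)
    # Each token is sent to its two highest-probability experts
    top2_probs, top2_experts = probs.topk(2, dim=-1)
    # ...weighted by the renormalized gate probabilities
    weights = top2_probs / top2_probs.sum(dim=-1, keepdim=True)
    return top2_experts, weights


logits = torch.randn(5, 8)  # 5 tokens, 8 experts
experts, weights = top2_gate(logits)
# In NLLB-MoE, a token dropped by both of its experts contributes a masked
# (zeroed) state; SwitchTransformers would instead pass the unmodified hidden
# state through, like a residual connection.
```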
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#implementation-differences-with-switchtransformers
#implementation-differences-with-switchtransformers
.md
359_3
The available checkpoints require around 350GB of storage. Make sure to use `accelerate` if you do not have enough RAM on your machine. While generating the target text, set the `forced_bos_token_id` to the target language id. The following example shows how to translate English to French using the *facebook/nllb-moe-54b* model. Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) for the list of all BCP-47 codes in the Flores 200 dataset. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") >>> article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage." >>> inputs = tokenizer(article, return_tensors="pt") >>> translated_tokens = model.generate( ... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50 ... ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage." ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-with-nllb-moe
#generating-with-nllb-moe
.md
359_4
English (`eng_Latn`) is set as the default language from which to translate. To translate from a different language, pass its BCP-47 code via the `src_lang` keyword argument when initializing the tokenizer. See the example below for a translation from Romanian to German: ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") >>> article = "Şeful ONU spune că nu există o soluţie militară în Siria" >>> inputs = tokenizer(article, return_tensors="pt") >>> translated_tokens = model.generate( ... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30 ... ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#generating-from-any-other-language-than-english
#generating-from-any-other-language-than-english
.md
359_5
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#resources
#resources
.md
359_6
This is the configuration class to store the configuration of a [`NllbMoeModel`]. It is used to instantiate an NLLB-MoE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the NLLB-MoE [facebook/nllb-moe-54b](https://huggingface.co/facebook/nllb-moe-54b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the NllbMoe model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`NllbMoeModel`]. d_model (`int`, *optional*, defaults to 1024): Dimensionality of the layers and the pooler layer. encoder_layers (`int`, *optional*, defaults to 12): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 12): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder. encoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in the encoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. classifier_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the classifier. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. second_expert_policy (`str`, *optional*, defaults to `"all"`): The policy used for sampling the probability of each token being routed to a second expert. normalize_router_prob_before_dropping (`bool`, *optional*, defaults to `True`): Whether or not to normalize the router probabilities before applying a mask based on the experts capacity (capacity dropping). batch_prioritized_routing (`bool`, *optional*, defaults to `True`): Whether or not to order the tokens by their router probabilities before capacity dropping. 
This means that the tokens that have the highest probabilities will be routed before other tokens that might be further in the sequence. moe_eval_capacity_token_fraction (`float`, *optional*, defaults to 1.0): Fraction of tokens as capacity during validation; if set to negative, uses the same as training. Should be in range: (0.0, 1.0]. num_experts (`int`, *optional*, defaults to 128): Number of experts for each NllbMoeSparseMlp layer. expert_capacity (`int`, *optional*, defaults to 64): Number of tokens that can be stored in each expert. encoder_sparse_step (`int`, *optional*, defaults to 4): Frequency of the sparse layers in the encoder. 4 means that one out of 4 layers will be sparse. decoder_sparse_step (`int`, *optional*, defaults to 4): Frequency of the sparse layers in the decoder. 4 means that one out of 4 layers will be sparse. router_dtype (`str`, *optional*, defaults to `"float32"`): The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the *selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961). router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`): Whether to ignore padding tokens when routing. If `False`, the padding tokens are not routed to any experts. router_bias (`bool`, *optional*, defaults to `False`): Whether or not the classifier of the router should have a bias. moe_token_dropout (`float`, *optional*, defaults to 0.2): Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert outputs. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not to return the router logits. Only set to `True` to get the auxiliary loss when training. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Example: ```python >>> from transformers import NllbMoeModel, NllbMoeConfig >>> # Initializing a NllbMoe facebook/nllb-moe-54b style configuration >>> configuration = NllbMoeConfig() >>> # Initializing a model from the facebook/nllb-moe-54b style configuration >>> model = NllbMoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
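The MoE-specific attributes documented above can likewise be overridden at construction time. A minimal sketch (the values are illustrative and much smaller than the released 54B checkpoint):

```python
>>> from transformers import NllbMoeConfig

>>> # A toy MoE configuration: 8 experts, with every second layer sparse
>>> configuration = NllbMoeConfig(num_experts=8, encoder_sparse_step=2, decoder_sparse_step=2)
>>> configuration.num_experts
8
```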
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeconfig
#nllbmoeconfig
.md
359_7
Router that uses token-choice top-2 expert assignment, with the same mechanism as in NLLB-MoE from the fairseq repository. Tokens are sorted by router_probs and then routed to their choice of expert until the expert's expert_capacity is reached. **There is no guarantee that each token is processed by an expert**, or that each expert receives at least one token. The router combining weights are also returned, to make sure that the states that are not updated will be masked. The capacity logic is sketched below. Methods: route_tokens - forward
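A minimal sketch of the capacity-dropping behaviour described above (an illustration of the sorting logic, not the actual implementation):

```python
import torch


def drop_over_capacity(expert_choice: torch.Tensor, router_probs: torch.Tensor, capacity: int):
    """Toy capacity dropping: tokens are visited in order of router probability
    (batch-prioritized routing) and kept until each expert is full."""
    order = torch.argsort(router_probs, descending=True)  # highest-confidence tokens first
    counts: dict[int, int] = {}
    keep = torch.zeros_like(expert_choice, dtype=torch.bool)
    for idx in order.tolist():
        expert = int(expert_choice[idx])
        if counts.get(expert, 0) < capacity:
            counts[expert] = counts.get(expert, 0) + 1
            keep[idx] = True  # routed; otherwise the token's update is masked
    return keep


choice = torch.randint(0, 4, (10,))  # 10 tokens, 4 experts
probs = torch.rand(10)
mask = drop_over_capacity(choice, probs, capacity=3)
```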
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoetop2router
#nllbmoetop2router
.md
359_8
Implementation of the NLLB-MoE sparse MLP module. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoesparsemlp
#nllbmoesparsemlp
.md
359_9
The bare NllbMoe Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`NllbMoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoemodel
#nllbmoemodel
.md
359_10
The NllbMoe Model with a language modeling head. Can be used for summarization. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`NllbMoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nllb-moe.md
https://huggingface.co/docs/transformers/en/model_doc/nllb-moe/#nllbmoeforconditionalgeneration
#nllbmoeforconditionalgeneration
.md
359_11
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/
.md
360_0
<Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsan-japanese
#gptsan-japanese
.md
360_1
The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama). GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and supports both text generation and masked language modeling tasks. These basic tasks can similarly be fine-tuned for translation or summarization.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#overview
#overview
.md
360_2
The `generate()` method can be used to generate text using GPTSAN-Japanese model. ```python >>> from transformers import AutoModel, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda() >>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt") >>> torch.manual_seed(0) >>> gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20) >>> tokenizer.decode(gen_tok[0]) '織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉' ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#usage-example
#usage-example
.md
360_3
GPTSAN has some unique features. It has the model structure of a Prefix-LM: it works as a shifted masked language model for prefix input tokens, while un-prefixed inputs behave like a normal generative model. The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text. GPTSAN has a sparse feed-forward layer based on Switch Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details. A minimal sketch of passing a Spout vector is shown below.
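A minimal sketch of passing a Spout vector at inference time. The random vector merely stands in for a class-conditioned or fine-tuned one, and reading the Spout size from `model.config.d_spout` is an assumption based on the released configuration:

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese")
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> # A random vector stands in for a tuned Spout; its size comes from the config
>>> spout = torch.rand(1, model.config.d_spout)
>>> outputs = model(input_ids=x_tok.input_ids, token_type_ids=x_tok.token_type_ids, spout=spout)
```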
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/gptsan-japanese.md
https://huggingface.co/docs/transformers/en/model_doc/gptsan-japanese/#gptsan-features
#gptsan-features
.md
360_4