Dataset schema: text (string, 5–58.6k chars) · source (string, 470 distinct values) · url (string, 49–167 chars) · source_section (string, 0–90 chars) · file_type (string, 1 value: `.md`) · id (string, 3–6 chars)
Wav2Vec2Bert Model with a frame classification head on top for tasks like Speaker Diarization. Wav2Vec2Bert was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed and Michael Auli. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Wav2Vec2BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertforaudioframeclassification
#wav2vec2bertforaudioframeclassification
.md
315_9
Wav2Vec2Bert Model with an XVector feature extraction head on top for tasks like Speaker Verification. Wav2Vec2Bert was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed and Michael Auli. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.). This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Wav2Vec2BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertforxvector
#wav2vec2bertforxvector
.md
315_10
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/
.md
316_0
The RoFormer model was proposed in [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen and Yunfeng Liu. The abstract from the paper is the following: *Position encoding in transformer architecture provides supervision for dependency modeling between elements at different positions in the sequence. We investigate various methods to encode positional information in transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing experiment for English benchmark will soon be updated.* This model was contributed by [junnyu](https://huggingface.co/junnyu). The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#overview
#overview
.md
316_1
RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown improved performance on classification tasks with long texts.
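As a quick, hedged sketch of basic usage (it assumes the `junnyu/roformer_chinese_base` checkpoint referenced on this page and a working PyTorch install), the model can be loaded and queried for hidden states as follows:

```python
import torch
from transformers import RoFormerTokenizer, RoFormerModel

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerModel.from_pretrained("junnyu/roformer_chinese_base")

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states have shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```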
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#usage-tips
#usage-tips
.md
316_2
- [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#resources
#resources
.md
316_3
This is the configuration class to store the configuration of a [`RoFormerModel`]. It is used to instantiate a RoFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RoFormer [junnyu/roformer_chinese_base](https://huggingface.co/junnyu/roformer_chinese_base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50000): Vocabulary size of the RoFormer model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RoFormerModel`] or [`TFRoFormerModel`]. embedding_size (`int`, *optional*, defaults to None): Dimensionality of the embedding layer. Defaults to `hidden_size` if not provided. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 1536): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 1536). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RoFormerModel`] or [`TFRoFormerModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. is_decoder (`bool`, *optional*, defaults to `False`): Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. rotary_value (`bool`, *optional*, defaults to `False`): Whether or not to apply rotary position embeddings to the value layer. Example: ```python >>> from transformers import RoFormerModel, RoFormerConfig >>> # Initializing a RoFormer junnyu/roformer_chinese_base style configuration >>> configuration = RoFormerConfig() >>> # Initializing a model (with random weights) from the junnyu/roformer_chinese_base style configuration >>> model = RoFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerconfig
#roformerconfig
.md
316_4
Construct a RoFormer tokenizer. Based on [Rust Jieba](https://pypi.org/project/rjieba/). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): File containing the vocabulary. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. do_basic_tokenize (`bool`, *optional*, defaults to `True`): Whether or not to do basic tokenization before WordPiece. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` unk_token (`str`, *optional*, defaults to `"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (`str`, *optional*, defaults to `"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (`str`, *optional*, defaults to `"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). Example: ```python >>> from transformers import RoFormerTokenizer >>> tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base") >>> tokenizer.tokenize("今天天气非常好。") ['今', '天', '天', '气', '非常', '好', '。'] ``` Methods: build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformertokenizer
#roformertokenizer
.md
316_5
Construct a "fast" RoFormer tokenizer (backed by HuggingFace's *tokenizers* library). [`RoFormerTokenizerFast`] is almost identical to [`BertTokenizerFast`] and runs end-to-end tokenization: punctuation splitting and wordpiece. There are some difference between them when tokenizing Chinese. This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Example: ```python >>> from transformers import RoFormerTokenizerFast >>> tokenizer = RoFormerTokenizerFast.from_pretrained("junnyu/roformer_chinese_base") >>> tokenizer.tokenize("今天天气非常好。") ['今', '天', '天', '气', '非常', '好', '。'] ``` Methods: build_inputs_with_special_tokens <frameworkcontent> <pt>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformertokenizerfast
#roformertokenizerfast
.md
316_6
The bare RoFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` and `add_cross_attention` arguments set to `True`; `encoder_hidden_states` is then expected as an input to the forward pass. Methods: forward
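Below is a minimal, hedged sketch of the decoder setup described above; the checkpoint name is taken from the examples on this page, and the `encoder_hidden_states` tensor is random and purely illustrative:

```python
import torch
from transformers import RoFormerConfig, RoFormerModel, RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")

# Configure the model as a decoder with cross-attention layers
config = RoFormerConfig.from_pretrained("junnyu/roformer_chinese_base")
config.is_decoder = True
config.add_cross_attention = True
model = RoFormerModel.from_pretrained("junnyu/roformer_chinese_base", config=config)

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
# Hidden states that would normally come from an encoder (random here, for illustration only)
encoder_hidden_states = torch.randn(1, 10, config.hidden_size)

outputs = model(**inputs, encoder_hidden_states=encoder_hidden_states)
print(outputs.last_hidden_state.shape)
```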
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformermodel
#roformermodel
.md
316_7
RoFormer Model with a `language modeling` head on top for CLM fine-tuning. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
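A hedged sketch of causal-LM style usage (the checkpoint is the Chinese base model referenced elsewhere on this page; setting `is_decoder=True` follows the pattern used by other BERT-like causal LM heads in the library):

```python
import torch
from transformers import RoFormerConfig, RoFormerForCausalLM, RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
config = RoFormerConfig.from_pretrained("junnyu/roformer_chinese_base")
config.is_decoder = True  # needed so the model applies a causal attention mask
model = RoFormerForCausalLM.from_pretrained("junnyu/roformer_chinese_base", config=config)

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])

print(outputs.loss)          # language modeling loss
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```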
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerforcausallm
#roformerforcausallm
.md
316_8
RoFormer Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
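A minimal, hedged masked-language-modeling sketch (checkpoint name from this page; the example sentence and the decoding of a single `[MASK]` position are illustrative):

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForMaskedLM

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")

inputs = tokenizer("今天[MASK]气非常好。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] token and take the highest-scoring replacement
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```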
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerformaskedlm
#roformerformaskedlm
.md
316_9
RoFormer Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
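A hedged sketch of sequence classification usage; note that the classification head on top of the referenced base checkpoint is randomly initialized, so the loss and logits below are only meaningful after fine-tuning:

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForSequenceClassification

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
# num_labels is task-specific; 2 is just an illustrative choice
model = RoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base", num_labels=2)

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))

print(outputs.loss)          # cross-entropy loss against the provided label
print(outputs.logits.shape)  # (batch_size, num_labels)
```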
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerforsequenceclassification
#roformerforsequenceclassification
.md
316_10
RoFormer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
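A hedged multiple-choice sketch (the prompt and choices are made up, and the head is randomly initialized unless fine-tuned), showing the usual `(batch_size, num_choices, sequence_length)` input layout:

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForMultipleChoice

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForMultipleChoice.from_pretrained("junnyu/roformer_chinese_base")

prompt = "今天天气非常好，"
choices = ["我们去公园散步。", "我们在家睡觉。"]

# Encode (prompt, choice) pairs, then add a batch dimension of size 1
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

outputs = model(**inputs, labels=torch.tensor([0]))
print(outputs.logits.shape)  # (batch_size, num_choices)
```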
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerformultiplechoice
#roformerformultiplechoice
.md
316_11
RoFormer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
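A hedged token classification sketch (the number of labels and the example sentence are illustrative; the head is randomly initialized on the base checkpoint):

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForTokenClassification

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
# num_labels depends on the label set of the NER task; 5 is only an example
model = RoFormerForTokenClassification.from_pretrained("junnyu/roformer_chinese_base", num_labels=5)

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)  # one label id per token
print(predictions)
```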
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerfortokenclassification
#roformerfortokenclassification
.md
316_12
RoFormer Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`RoFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
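A hedged question-answering sketch (question and context are illustrative; the span head is randomly initialized on the base checkpoint, so predictions are only meaningful after fine-tuning on a QA dataset):

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForQuestionAnswering

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForQuestionAnswering.from_pretrained("junnyu/roformer_chinese_base")

question, context = "今天天气怎么样？", "今天天气非常好。"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end positions and decode the span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```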
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#roformerforquestionanswering
#roformerforquestionanswering
.md
316_13
No docstring available for TFRoFormerModel Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformermodel
#tfroformermodel
.md
316_14
No docstring available for TFRoFormerForMaskedLM Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerformaskedlm
#tfroformerformaskedlm
.md
316_15
No docstring available for TFRoFormerForCausalLM Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerforcausallm
#tfroformerforcausallm
.md
316_16
No docstring available for TFRoFormerForSequenceClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerforsequenceclassification
#tfroformerforsequenceclassification
.md
316_17
No docstring available for TFRoFormerForMultipleChoice Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerformultiplechoice
#tfroformerformultiplechoice
.md
316_18
No docstring available for TFRoFormerForTokenClassification Methods: call
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerfortokenclassification
#tfroformerfortokenclassification
.md
316_19
No docstring available for TFRoFormerForQuestionAnswering Methods: call </tf> <jax>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#tfroformerforquestionanswering
#tfroformerforquestionanswering
.md
316_20
No docstring available for FlaxRoFormerModel Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformermodel
#flaxroformermodel
.md
316_21
No docstring available for FlaxRoFormerForMaskedLM Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformerformaskedlm
#flaxroformerformaskedlm
.md
316_22
No docstring available for FlaxRoFormerForSequenceClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformerforsequenceclassification
#flaxroformerforsequenceclassification
.md
316_23
No docstring available for FlaxRoFormerForMultipleChoice Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformerformultiplechoice
#flaxroformerformultiplechoice
.md
316_24
No docstring available for FlaxRoFormerForTokenClassification Methods: __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformerfortokenclassification
#flaxroformerfortokenclassification
.md
316_25
No docstring available for FlaxRoFormerForQuestionAnswering Methods: __call__ </jax> </frameworkcontent>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/roformer.md
https://huggingface.co/docs/transformers/en/model_doc/roformer/#flaxroformerforquestionanswering
#flaxroformerforquestionanswering
.md
316_26
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/
.md
317_0
The OmDet-Turbo model was proposed in [Real-time Transformer-based Open-Vocabulary Detection with Efficient Fusion Head](https://arxiv.org/abs/2403.06892) by Tiancheng Zhao, Peng Liu, Xuan He, Lu Zhang, Kyusong Lee. OmDet-Turbo incorporates components from RT-DETR and introduces a swift multimodal fusion module to achieve real-time open-vocabulary object detection capabilities while maintaining high accuracy. The base model achieves performance of up to 100.2 FPS and 53.4 AP on COCO zero-shot. The abstract from the paper is the following: *End-to-end transformer-based detectors (DETRs) have shown exceptional performance in both closed-set and open-vocabulary object detection (OVD) tasks through the integration of language modalities. However, their demanding computational requirements have hindered their practical application in real-time object detection (OD) scenarios. In this paper, we scrutinize the limitations of two leading models in the OVDEval benchmark, OmDet and Grounding-DINO, and introduce OmDet-Turbo. This novel transformer-based real-time OVD model features an innovative Efficient Fusion Head (EFH) module designed to alleviate the bottlenecks observed in OmDet and Grounding-DINO. Notably, OmDet-Turbo-Base achieves a 100.2 frames per second (FPS) with TensorRT and language cache techniques applied. Notably, in zero-shot scenarios on COCO and LVIS datasets, OmDet-Turbo achieves performance levels nearly on par with current state-of-the-art supervised models. Furthermore, it establishes new state-of-the-art benchmarks on ODinW and OVDEval, boasting an AP of 30.1 and an NMS-AP of 26.86, respectively. The practicality of OmDet-Turbo in industrial applications is underscored by its exceptional performance on benchmark datasets and superior inference speed, positioning it as a compelling choice for real-time object detection tasks.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/omdet_turbo_architecture.jpeg" alt="drawing" width="600"/> <small> OmDet-Turbo architecture overview. Taken from the <a href="https://arxiv.org/abs/2403.06892">original paper</a>. </small> This model was contributed by [yonigozlan](https://huggingface.co/yonigozlan). The original code can be found [here](https://github.com/om-ai-lab/OmDet).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#overview
#overview
.md
317_1
One unique property of OmDet-Turbo compared to other zero-shot object detection models, such as [Grounding DINO](grounding-dino), is the decoupled classes and prompt embedding structure that allows caching of text embeddings. This means that the model needs both classes and task as inputs, where classes is a list of objects we want to detect and task is the grounded text used to guide open-vocabulary detection. This approach limits the scope of the open-vocabulary detection and makes the decoding process faster. [`OmDetTurboProcessor`] is used to prepare the classes, task and image triplet. The task input is optional, and when not provided, it will default to `"Detect [class1], [class2], [class3], ..."`. To process the results from the model, one can use `post_process_grounded_object_detection` from [`OmDetTurboProcessor`]. Notably, this function takes in the input classes, as unlike other zero-shot object detection models, the decoupling of classes and task embeddings means that no decoding of the predicted class embeddings is needed in the post-processing step, and the predicted classes can be matched to the inputted ones directly.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#usage-tips
#usage-tips
.md
317_2
Here's how to load the model and prepare the inputs to perform zero-shot object detection on a single image: ```python >>> import torch >>> import requests >>> from PIL import Image >>> from transformers import AutoProcessor, OmDetTurboForObjectDetection >>> processor = AutoProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf") >>> model = OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text_labels = ["cat", "remote"] >>> inputs = processor(image, text=text_labels, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> # convert outputs (bounding boxes and class logits) >>> results = processor.post_process_grounded_object_detection( ... outputs, ... target_sizes=[(image.height, image.width)], ... text_labels=text_labels, ... threshold=0.3, ... nms_threshold=0.3, ... ) >>> result = results[0] >>> boxes, scores, text_labels = result["boxes"], result["scores"], result["text_labels"] >>> for box, score, text_label in zip(boxes, scores, text_labels): ... box = [round(i, 2) for i in box.tolist()] ... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}") Detected remote with confidence 0.768 at location [39.89, 70.35, 176.74, 118.04] Detected cat with confidence 0.72 at location [11.6, 54.19, 314.8, 473.95] Detected remote with confidence 0.563 at location [333.38, 75.77, 370.7, 187.03] Detected cat with confidence 0.552 at location [345.15, 23.95, 639.75, 371.67] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#single-image-inference
#single-image-inference
.md
317_3
OmDet-Turbo can perform batched multi-image inference, with support for different text prompts and classes in the same batch: ```python >>> import torch >>> import requests >>> from io import BytesIO >>> from PIL import Image >>> from transformers import AutoProcessor, OmDetTurboForObjectDetection >>> processor = AutoProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf") >>> model = OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf") >>> url1 = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image1 = Image.open(BytesIO(requests.get(url1).content)).convert("RGB") >>> text_labels1 = ["cat", "remote"] >>> task1 = "Detect {}.".format(", ".join(text_labels1)) >>> url2 = "http://images.cocodataset.org/train2017/000000257813.jpg" >>> image2 = Image.open(BytesIO(requests.get(url2).content)).convert("RGB") >>> text_labels2 = ["boat"] >>> task2 = "Detect everything that looks like a boat." >>> url3 = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" >>> image3 = Image.open(BytesIO(requests.get(url3).content)).convert("RGB") >>> text_labels3 = ["statue", "trees"] >>> task3 = "Focus on the foreground, detect statue and trees." >>> inputs = processor( ... images=[image1, image2, image3], ... text=[text_labels1, text_labels2, text_labels3], ... task=[task1, task2, task3], ... return_tensors="pt", ... ) >>> with torch.no_grad(): ... outputs = model(**inputs) >>> # convert outputs (bounding boxes and class logits) >>> results = processor.post_process_grounded_object_detection( ... outputs, ... text_labels=[text_labels1, text_labels2, text_labels3], ... target_sizes=[(image.height, image.width) for image in [image1, image2, image3]], ... threshold=0.2, ... nms_threshold=0.3, ... ) >>> for i, result in enumerate(results): ... for score, text_label, box in zip( ... result["scores"], result["text_labels"], result["boxes"] ... ): ... box = [round(i, 1) for i in box.tolist()] ... print( ... f"Detected {text_label} with confidence " ... f"{round(score.item(), 2)} at location {box} in image {i}" ... ) Detected remote with confidence 0.77 at location [39.9, 70.4, 176.7, 118.0] in image 0 Detected cat with confidence 0.72 at location [11.6, 54.2, 314.8, 474.0] in image 0 Detected remote with confidence 0.56 at location [333.4, 75.8, 370.7, 187.0] in image 0 Detected cat with confidence 0.55 at location [345.2, 24.0, 639.8, 371.7] in image 0 Detected boat with confidence 0.32 at location [146.9, 219.8, 209.6, 250.7] in image 1 Detected boat with confidence 0.3 at location [319.1, 223.2, 403.2, 238.4] in image 1 Detected boat with confidence 0.27 at location [37.7, 220.3, 84.0, 235.9] in image 1 Detected boat with confidence 0.22 at location [407.9, 207.0, 441.7, 220.2] in image 1 Detected statue with confidence 0.73 at location [544.7, 210.2, 651.9, 502.8] in image 2 Detected trees with confidence 0.25 at location [3.9, 584.3, 391.4, 785.6] in image 2 Detected trees with confidence 0.25 at location [1.4, 621.2, 118.2, 787.8] in image 2 Detected statue with confidence 0.2 at location [428.1, 205.5, 767.3, 759.5] in image 2 ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#multi-image-inference
#multi-image-inference
.md
317_4
This is the configuration class to store the configuration of a [`OmDetTurboForObjectDetection`]. It is used to instantiate an OmDet-Turbo model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OmDet-Turbo [omlab/omdet-turbo-swin-tiny-hf](https://huggingface.co/omlab/omdet-turbo-swin-tiny-hf) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`PretrainedConfig`, *optional*): The configuration of the text backbone. backbone_config (`PretrainedConfig`, *optional*): The configuration of the vision backbone. use_timm_backbone (`bool`, *optional*, defaults to `True`): Whether to use the timm library for the vision backbone. backbone (`str`, *optional*, defaults to `"swin_tiny_patch4_window7_224"`): The name of the pretrained vision backbone to use. If `use_pretrained_backbone=False`, a randomly initialized backbone with the same architecture as `backbone` is used. backbone_kwargs (`dict`, *optional*): Additional kwargs for the vision backbone. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use a pretrained vision backbone. apply_layernorm_after_vision_backbone (`bool`, *optional*, defaults to `True`): Whether to apply layer normalization on the feature maps of the vision backbone output. image_size (`int`, *optional*, defaults to 640): The size (resolution) of each image. disable_custom_kernels (`bool`, *optional*, defaults to `False`): Whether to disable custom kernels. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon value for layer normalization. batch_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon value for batch normalization. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. text_projection_in_dim (`int`, *optional*, defaults to 512): The input dimension for the text projection. text_projection_out_dim (`int`, *optional*, defaults to 512): The output dimension for the text projection. task_encoder_hidden_dim (`int`, *optional*, defaults to 1024): The feedforward dimension for the task encoder. class_embed_dim (`int`, *optional*, defaults to 512): The dimension of the class embeddings. class_distance_type (`str`, *optional*, defaults to `"cosine"`): The type of distance used to compare predicted classes to the projected class embeddings. Can be `"cosine"` or `"dot"`. num_queries (`int`, *optional*, defaults to 900): The number of queries. csp_activation (`str`, *optional*, defaults to `"silu"`): The activation function of the Cross Stage Partial (CSP) networks of the encoder. conv_norm_activation (`str`, *optional*, defaults to `"gelu"`): The activation function of the ConvNormLayer layers of the encoder. encoder_feedforward_activation (`str`, *optional*, defaults to `"relu"`): The activation function for the feedforward network of the encoder. encoder_feedforward_dropout (`float`, *optional*, defaults to 0.0): The dropout rate following the activation of the encoder feedforward network. encoder_dropout (`float`, *optional*, defaults to 0.0): The dropout rate of the encoder multi-head attention module. hidden_expansion (`int`, *optional*, defaults to 1): The hidden expansion of the CSP networks in the encoder. 
vision_features_channels (`tuple(int)`, *optional*, defaults to `[256, 256, 256]`): The projected vision features channels used as inputs for the decoder. encoder_hidden_dim (`int`, *optional*, defaults to 256): The hidden dimension of the encoder. encoder_in_channels (`List(int)`, *optional*, defaults to `[192, 384, 768]`): The input channels for the encoder. encoder_projection_indices (`List(int)`, *optional*, defaults to `[2]`): The indices of the input features projected by each layer. encoder_attention_heads (`int`, *optional*, defaults to 8): The number of attention heads for the encoder. encoder_dim_feedforward (`int`, *optional*, defaults to 2048): The feedforward dimension for the encoder. encoder_layers (`int`, *optional*, defaults to 1): The number of layers in the encoder. positional_encoding_temperature (`int`, *optional*, defaults to 10000): The positional encoding temperature in the encoder. num_feature_levels (`int`, *optional*, defaults to 3): The number of feature levels for the multi-scale deformable attention module of the decoder. decoder_hidden_dim (`int`, *optional*, defaults to 256): The hidden dimension of the decoder. decoder_num_heads (`int`, *optional*, defaults to 8): The number of heads for the decoder. decoder_num_layers (`int`, *optional*, defaults to 6): The number of layers for the decoder. decoder_activation (`str`, *optional*, defaults to `"relu"`): The activation function for the decoder. decoder_dim_feedforward (`int`, *optional*, defaults to 2048): The feedforward dimension for the decoder. decoder_num_points (`int`, *optional*, defaults to 4): The number of points sampled in the decoder multi-scale deformable attention module. decoder_dropout (`float`, *optional*, defaults to 0.0): The dropout rate for the decoder. eval_size (`Tuple[int, int]`, *optional*): Height and width used to compute the effective height and width of the position embeddings after taking into account the stride (see RTDetr). learn_initial_query (`bool`, *optional*, defaults to `False`): Whether to learn the initial query. cache_size (`int`, *optional*, defaults to 100): The cache size for the classes and prompts caches. is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether the model is used as an encoder-decoder model or not. kwargs (`Dict[str, Any]`, *optional*): Additional parameters from the architecture. The values in kwargs will be saved as part of the configuration and can be used to control the model outputs. Examples: ```python >>> from transformers import OmDetTurboConfig, OmDetTurboForObjectDetection >>> # Initializing an OmDet-Turbo omlab/omdet-turbo-swin-tiny-hf style configuration >>> configuration = OmDetTurboConfig() >>> # Initializing a model (with random weights) from the omlab/omdet-turbo-swin-tiny-hf style configuration >>> model = OmDetTurboForObjectDetection(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboconfig
#omdetturboconfig
.md
317_5
Constructs an OmDet-Turbo processor which wraps a Deformable DETR image processor and an AutoTokenizer into a single processor. [`OmDetTurboProcessor`] offers all the functionalities of [`DetrImageProcessor`] and [`AutoTokenizer`]. See the docstring of [`~OmDetTurboProcessor.__call__`] and [`~OmDetTurboProcessor.decode`] for more information. Args: image_processor (`DetrImageProcessor`): An instance of [`DetrImageProcessor`]. The image processor is a required input. tokenizer (`AutoTokenizer`): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. Methods: post_process_grounded_object_detection
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboprocessor
#omdetturboprocessor
.md
317_6
OmDetTurbo Model (consisting of a vision and a text backbone, and an encoder-decoder architecture) outputting bounding boxes and class scores for tasks such as COCO detection. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`OmDetTurboConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/omdet-turbo.md
https://huggingface.co/docs/transformers/en/model_doc/omdet-turbo/#omdetturboforobjectdetection
#omdetturboforobjectdetection
.md
317_7
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/
.md
318_0
The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text. The abstract from the paper is the following: *Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/owlvit_architecture.jpg" alt="drawing" width="600"/> <small> OWL-ViT architecture. Taken from the <a href="https://arxiv.org/abs/2205.06230">original paper</a>. </small> This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#overview
#overview
.md
318_1
OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. [`OwlViTImageProcessor`] can be used to resize (or rescale) and normalize images for the model and [`CLIPTokenizer`] is used to encode the text. [`OwlViTProcessor`] wraps [`OwlViTImageProcessor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`OwlViTProcessor`] and [`OwlViTForObjectDetection`]. ```python >>> import requests >>> from PIL import Image >>> import torch >>> from transformers import OwlViTProcessor, OwlViTForObjectDetection >>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") >>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text_labels = [["a photo of a cat", "a photo of a dog"]] >>> inputs = processor(text=text_labels, images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2] >>> target_sizes = torch.tensor([(image.height, image.width)]) >>> # Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax) >>> results = processor.post_process_grounded_object_detection( ... outputs=outputs, target_sizes=target_sizes, threshold=0.1, text_labels=text_labels ... ) >>> # Retrieve predictions for the first image for the corresponding text queries >>> result = results[0] >>> boxes, scores, text_labels = result["boxes"], result["scores"], result["text_labels"] >>> for box, score, text_label in zip(boxes, scores, text_labels): ... box = [round(i, 2) for i in box.tolist()] ... print(f"Detected {text_label} with confidence {round(score.item(), 3)} at location {box}") Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29] Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#usage-tips
#usage-tips
.md
318_2
A demo notebook on using OWL-ViT for zero- and one-shot (image-guided) object detection can be found [here](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#resources
#resources
.md
318_3
[`OwlViTConfig`] is the configuration class to store the configuration of an [`OwlViTModel`]. It is used to instantiate an OWL-ViT model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWL-ViT [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`OwlViTTextConfig`]. vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`OwlViTVisionConfig`]. projection_dim (`int`, *optional*, defaults to 512): Dimensionality of text and vision projection layers. logit_scale_init_value (`float`, *optional*, defaults to 2.6592): The initial value of the *logit_scale* parameter. Default is used as per the original OWL-ViT implementation. return_dict (`bool`, *optional*, defaults to `True`): Whether or not the model should return a dictionary. If `False`, returns a tuple. kwargs (*optional*): Dictionary of keyword arguments. Methods: from_text_vision_configs
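As a hedged sketch (passing the sub-configurations as dictionaries is an assumption based on the argument descriptions above), a full OWL-ViT configuration can be assembled from its text and vision parts and used to build a randomly initialized model:

```python
from transformers import OwlViTConfig, OwlViTTextConfig, OwlViTVisionConfig, OwlViTForObjectDetection

# Sub-configurations with google/owlvit-base-patch32 style defaults
text_config = OwlViTTextConfig()
vision_config = OwlViTVisionConfig()

# Combine them into a full OWL-ViT configuration
# (the `from_text_vision_configs` helper listed above serves the same purpose)
config = OwlViTConfig(text_config=text_config.to_dict(), vision_config=vision_config.to_dict())

# Instantiate a model with random weights from this configuration
model = OwlViTForObjectDetection(config)
print(config.projection_dim)  # 512 by default
```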
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitconfig
#owlvitconfig
.md
318_4
This is the configuration class to store the configuration of an [`OwlViTTextModel`]. It is used to instantiate an OwlViT text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OwlViT [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 49408): Vocabulary size of the OWL-ViT text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`OwlViTTextModel`]. hidden_size (`int`, *optional*, defaults to 512): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (`int`, *optional*, defaults to 16): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). pad_token_id (`int`, *optional*, defaults to 0): The id of the padding token in the input sequences. bos_token_id (`int`, *optional*, defaults to 49406): The id of the beginning-of-sequence token in the input sequences. eos_token_id (`int`, *optional*, defaults to 49407): The id of the end-of-sequence token in the input sequences. Example: ```python >>> from transformers import OwlViTTextConfig, OwlViTTextModel >>> # Initializing a OwlViTTextModel with google/owlvit-base-patch32 style configuration >>> configuration = OwlViTTextConfig() >>> # Initializing a OwlViTTextModel (with random weights) from the google/owlvit-base-patch32 style configuration >>> model = OwlViTTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextconfig
#owlvittextconfig
.md
318_5
This is the configuration class to store the configuration of an [`OwlViTVisionModel`]. It is used to instantiate an OWL-ViT image encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWL-ViT [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of channels in the input images. image_size (`int`, *optional*, defaults to 768): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). Example: ```python >>> from transformers import OwlViTVisionConfig, OwlViTVisionModel >>> # Initializing a OwlViTVisionModel with google/owlvit-base-patch32 style configuration >>> configuration = OwlViTVisionConfig() >>> # Initializing a OwlViTVisionModel (with random weights) from the google/owlvit-base-patch32 style configuration >>> model = OwlViTVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionconfig
#owlvitvisionconfig
.md
318_6
Constructs an OWL-ViT image processor. This image processor inherits from [`ImageProcessingMixin`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the shorter edge of the input to a certain `size`. size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}): The size to use for resizing the image. Only has an effect if `do_resize` is set to `True`. If `size` is a sequence like (h, w), output size will be matched to this. If `size` is an int, then image will be resized to (size, size). resample (`int`, *optional*, defaults to `Resampling.BICUBIC`): An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`, `PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`, `PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set to `True`. do_center_crop (`bool`, *optional*, defaults to `False`): Whether to crop the input at the center. If the input size is smaller than `crop_size` along any edge, the image is padded with 0's and then center cropped. crop_size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}): The size to use for center cropping the image. Only has an effect if `do_center_crop` is set to `True`. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the input by a certain factor. rescale_factor (`float`, *optional*, defaults to `1/255`): The factor to use for rescaling the image. Only has an effect if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to normalize the input with `image_mean` and `image_std`. image_mean (`List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): The sequence of means for each channel, to be used when normalizing images. image_std (`List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`): The sequence of standard deviations for each channel, to be used when normalizing images. Methods: preprocess - post_process_object_detection - post_process_image_guided_detection
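A minimal preprocessing sketch for the defaults listed above; the checkpoint name follows the other OWL-ViT sections in this document, and the sample COCO image URL is only an illustration (any RGB image works). The printed shape simply reflects the default 768x768 resize.

```python
from PIL import Image
import requests
from transformers import OwlViTImageProcessor

image_processor = OwlViTImageProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Resizes to the default 768x768, rescales to [0, 1] and normalizes with the CLIP mean/std
inputs = image_processor(images=image, return_tensors="pt")
print(inputs.pixel_values.shape)  # torch.Size([1, 3, 768, 768]) with the defaults above
```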
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitimageprocessor
#owlvitimageprocessor
.md
318_7
OwlViTProcessor Constructs an OWL-ViT processor which wraps [`OwlViTImageProcessor`] and [`CLIPTokenizer`]/[`CLIPTokenizerFast`] into a single processor that inherits both the image processor and tokenizer functionalities. See [`~OwlViTProcessor.__call__`] and [`~OwlViTProcessor.decode`] for more information. Args: image_processor ([`OwlViTImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`CLIPTokenizer`, `CLIPTokenizerFast`], *optional*): The tokenizer is a required input. - __call__ - post_process_grounded_object_detection - post_process_image_guided_detection
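A rough sketch of how the wrapped tokenizer and image processor cooperate; the checkpoint and sample image URL are assumptions for illustration, and any list of text queries plus an RGB image works.

```python
from PIL import Image
import requests
from transformers import OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The tokenizer encodes the text queries, the image processor prepares the pixel values
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
print(sorted(inputs.keys()))  # ['attention_mask', 'input_ids', 'pixel_values']
```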
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitprocessor
#owlvitprocessor
.md
318_8
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`OwlViTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_image_features
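A short sketch of the two feature-extraction methods listed above (`get_text_features` and `get_image_features`); the checkpoint name follows the other OWL-ViT sections in this document, and the sample image URL is only an illustration.

```python
import torch
import requests
from PIL import Image
from transformers import OwlViTProcessor, OwlViTModel

model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")

with torch.no_grad():
    # Pooled, projected embeddings for the text queries and the image
    text_features = model.get_text_features(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
    image_features = model.get_image_features(pixel_values=inputs.pixel_values)
print(text_features.shape, image_features.shape)
```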
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitmodel
#owlvitmodel
.md
318_9
No docstring available for OwlViTTextModel Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvittextmodel
#owlvittextmodel
.md
318_10
No docstring available for OwlViTVisionModel Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitvisionmodel
#owlvitvisionmodel
.md
318_11
No docstring available for OwlViTForObjectDetection Methods: forward - image_guided_detection
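Since no docstring is available here, the sketch below shows one plausible zero-shot detection flow assembled from the pieces documented above (`OwlViTProcessor.__call__` and its `post_process_grounded_object_detection` method); the exact keyword arguments and returned keys of the post-processing step may differ between library versions.

```python
import torch
import requests
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text_queries = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=text_queries, images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Rescale the predicted boxes to the original image size (height, width)
target_sizes = torch.tensor([(image.height, image.width)])
results = processor.post_process_grounded_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1
)

for score, label, box in zip(results[0]["scores"], results[0]["labels"], results[0]["boxes"]):
    print(f"{text_queries[0][label]}: {score:.2f} at {[round(c, 1) for c in box.tolist()]}")
```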
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/owlvit.md
https://huggingface.co/docs/transformers/en/model_doc/owlvit/#owlvitforobjectdetection
#owlvitforobjectdetection
.md
318_12
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/
.md
319_0
The GraniteMoe model was proposed in [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox and Rameswar Panda. PowerMoE-3B is a 3B sparse Mixture-of-Experts (sMoE) language model trained with the Power learning rate scheduler. It sparsely activates 800M parameters for each token. It is trained on a mix of open-source and proprietary datasets. PowerMoE-3B has shown promising results compared to other dense models with 2x the active parameters across various benchmarks, including natural language multiple-choice, code generation, and math reasoning. The abstract from the paper is the following: *Finding the optimal learning rate for language model pretraining is a challenging task. This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with Billions or Trillions of parameters. Recent studies propose using small proxy models and small corpus to perform hyperparameter searches and transposing the optimal parameters to large models and large corpus. While the zero-shot transferability is theoretically and empirically proven for model size related hyperparameters, like depth and width, the zero-shot transfer from small corpus to large corpus is underexplored. In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between variables and demonstrated its transferability across model sizes. Based on the observation, we propose a new learning rate scheduler, Power scheduler, that is agnostic about the number of training tokens and batch size. The experiment shows that combining the Power scheduler with Maximum Update Parameterization (\mup) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance as state-of-the-art small language models. We [open source](https://huggingface.co/collections/ibm/power-lm-66be64ae647ddf11b9808000) these pretrained models.* Tips: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "ibm/PowerMoE-3b" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto") model.eval() # change input text as desired prompt = "Write a code to find the maximum value in a list of numbers." # tokenize the text input_tokens = tokenizer(prompt, return_tensors="pt") # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` This model was contributed by [mayank-mishra](https://huggingface.co/mayank-mishra).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#overview
#overview
.md
319_1
This is the configuration class to store the configuration of a [`GraniteMoeModel`]. It is used to instantiate an GraniteMoe model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GraniteMoe-3B. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the GraniteMoe model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`GraniteMoeModel`] hidden_size (`int`, *optional*, defaults to 4096): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 11008): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details checkout [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 2048): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*): Padding token id. bos_token_id (`int`, *optional*, defaults to 1): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie weight embeddings rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update `max_position_embeddings` to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an experimental feature, subject to breaking API changes in future versions. 
attention_bias (`bool`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. embedding_multiplier (`float`, *optional*, defaults to 1.0): Multiplier applied to the embedding outputs. logits_scaling (`float`, *optional*, defaults to 1.0): Divisor applied to the output logits. residual_multiplier (`float`, *optional*, defaults to 1.0): Multiplier applied to the residual connections. attention_multiplier (`float`, *optional*, defaults to 1.0): Multiplier applied to the attention scores. num_local_experts (`int`, *optional*, defaults to 8): Total number of experts. num_experts_per_tok (`int`, *optional*, defaults to 2): Number of experts activated per token. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss. router_aux_loss_coef (`float`, *optional*, defaults to 0.001): The coefficient for the router auxiliary loss. Example: ```python >>> from transformers import GraniteMoeModel, GraniteMoeConfig >>> # Initializing a GraniteMoe granitemoe-3b style configuration >>> configuration = GraniteMoeConfig() >>> # Initializing a model (with random weights) from the granitemoe-3b style configuration >>> model = GraniteMoeModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeconfig
#granitemoeconfig
.md
319_2
The bare GraniteMoe Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`GraniteMoeConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`GraniteMoeDecoderLayer`] Args: config: GraniteMoeConfig Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoemodel
#granitemoemodel
.md
319_3
No docstring available for GraniteMoeForCausalLM Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/granitemoe.md
https://huggingface.co/docs/transformers/en/model_doc/granitemoe/#granitemoeforcausallm
#granitemoeforcausallm
.md
319_4
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/
.md
320_0
The RWKV model was proposed in [this repo](https://github.com/BlinkDL/RWKV-LM). It suggests a tweak to the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestep 0 and timestep 1 together is the same as passing inputs at timestep 0, then inputs at timestep 1 along with the state of timestep 0 (see example below). This can be more efficient than a regular Transformer and can handle sentences of any length (even if the model uses a fixed context length for training). This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/BlinkDL/RWKV-LM).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#overview
#overview
.md
320_1
```py import torch from transformers import AutoTokenizer, RwkvConfig, RwkvModel model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile") tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile") inputs = tokenizer("This is an example.", return_tensors="pt") # Feed everything to the model outputs = model(inputs["input_ids"]) output_whole = outputs.last_hidden_state outputs = model(inputs["input_ids"][:, :2]) output_one = outputs.last_hidden_state # Using the state computed on the first inputs, we will get the same output outputs = model(inputs["input_ids"][:, 2:], state=outputs.state) output_two = outputs.last_hidden_state torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5) ``` If you want to make sure the model stops generating when `'\n\n'` is detected, we recommend using the following stopping criteria: ```python from transformers import StoppingCriteria class RwkvStoppingCriteria(StoppingCriteria): def __init__(self, eos_sequence = [187,187], eos_token_id = 537): self.eos_sequence = eos_sequence self.eos_token_id = eos_token_id def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: last_2_ids = input_ids[:,-2:].tolist() return self.eos_sequence in last_2_ids output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()]) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#usage-example
#usage-example
.md
320_2
This is the configuration class to store the configuration of a [`RwkvModel`]. It is used to instantiate a RWKV model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RWKV-4 [RWKV/rwkv-4-169m-pile](https://huggingface.co/RWKV/rwkv-4-169m-pile) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50277): Vocabulary size of the RWKV model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RwkvModel`]. context_length (`int`, *optional*, defaults to 1024): The maximum sequence length that this model can be used with in a single forward pass (using it in RNN mode allows any sequence length). hidden_size (`int`, *optional*, defaults to 4096): Dimensionality of the embeddings and hidden states. num_hidden_layers (`int`, *optional*, defaults to 32): Number of hidden layers in the model. attention_hidden_size (`int`, *optional*): Dimensionality of the attention hidden states. Will default to `hidden_size` if unset. intermediate_size (`int`, *optional*): Dimensionality of the inner feed-forward layers. Will default to 4 times `hidden_size` if unset. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. bos_token_id (`int`, *optional*, defaults to 0): The id of the beginning of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX. eos_token_id (`int`, *optional*, defaults to 0): The id of the end of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as GPTNeoX. rescale_every (`int`, *optional*, defaults to 6): At inference, the hidden states (and weights of the corresponding output layers) are divided by 2 every `rescale_every` layer. If set to 0 or a negative number, no rescale is done. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to tie the word embeddings with the input token embeddings. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last state. Example: ```python >>> from transformers import RwkvConfig, RwkvModel >>> # Initializing a Rwkv configuration >>> configuration = RwkvConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = RwkvModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvconfig
#rwkvconfig
.md
320_3
The bare RWKV Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RwkvConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvmodel
#rwkvmodel
.md
320_4
The RWKV Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`RwkvConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
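A minimal generation sketch with the language-modeling head, assuming the class is exposed as `RwkvForCausalLM` and reusing the checkpoint from the usage example above.

```python
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")
model = RwkvForCausalLM.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")
# Greedy decoding; the RwkvStoppingCriteria from the usage example can be passed via `stopping_criteria`
outputs = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(outputs[0]))
```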
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkvlmheadmodel
#rwkvlmheadmodel
.md
320_5
In a traditional auto-regressive Transformer, attention is written as $$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$ with \\(Q\\), \\(K\\) and \\(V\\) are matrices of shape `seq_len x hidden_size` named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product \\(QK^{T}\\) then has shape `seq_len x seq_len` and we can take the matrix product with \\(V\\) to get the output \\(O\\) of the same shape as the others. Replacing the softmax by its value gives: $$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$ Note that the entries in \\(QK^{T}\\) corresponding to \\(j > i\\) are masked (the sum stops at j) because the attention is not allowed to look at future tokens (only past ones). In comparison, the RWKV attention is given by $$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$ where \\(R\\) is a new matrix called receptance by the author, \\(K\\) and \\(V\\) are still the key and value (\\(\sigma\\) here is the sigmoid function). \\(W\\) is a new vector that represents the position of the token and is given by $$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$ with \\(u\\) and \\(w\\) learnable parameters called in the code `time_first` and `time_decay` respectively. The numerator and denominator can both be expressed recursively. Naming them \\(N_{i}\\) and \\(D_{i}\\) we have: $$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} \cdots + e^{(i-2)w + K_{1}} V_{1}$$ so \\(\hat{N}_{i}\\) (called `numerator_state` in the code) satisfies $$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$ and $$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} \cdots + e^{(i-2)w + K_{1}}$$ so \\(\hat{D}_{i}\\) (called `denominator_state` in the code) satisfies $$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$ The actual recurrent formula used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is, but the exponential of the maximum term is divided of the numerator and denominator: $$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$ with \\(M\\) the maximum of all \\(x_{j}\\). So here on top of saving the numerator state (\\(\hat{N}\\)) and the denominator state (\\(\hat{D}\\)) we also keep track of the maximum of all terms encountered in the exponentials. So we actually use $$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$ defined by the following recurrent formulas: $$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and $$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and \\(M_{j+1} = q\\). 
With those, we can then compute $$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ and $$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ (both numerator and denominator are rescaled by the same factor \\(e^{-q}\\), which leaves their ratio unchanged), which finally gives us $$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
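The recurrence above translates almost line for line into code. The sketch below is an illustrative, non-optimized PyTorch loop over a sequence, assuming per-token vectors `receptance`, `key`, `value` of shape `(seq_len, hidden)` and learnable vectors `time_first` (\\(u\\)) and `time_decay` (\\(w\\)) of shape `(hidden,)`; it is not the library's fused kernel.

```python
import torch

def rwkv_linear_attention(receptance, key, value, time_first, time_decay):
    seq_len, hidden = key.shape
    num_state = torch.zeros(hidden)            # numerator state, the N~ of the text
    den_state = torch.zeros(hidden)            # denominator state, the D~ of the text
    max_state = torch.full((hidden,), -1e38)   # running maximum M of the exponents
    output = torch.zeros_like(value)

    for i in range(seq_len):
        # Output at position i, with numerator and denominator rescaled by e^{-q}
        q = torch.maximum(time_first + key[i], max_state)
        numerator = torch.exp(time_first + key[i] - q) * value[i] + torch.exp(max_state - q) * num_state
        denominator = torch.exp(time_first + key[i] - q) + torch.exp(max_state - q) * den_state
        output[i] = torch.sigmoid(receptance[i]) * numerator / denominator

        # State update for position i + 1 (the recurrent formulas for N~, D~ and M)
        q = torch.maximum(key[i], time_decay + max_state)
        num_state = torch.exp(key[i] - q) * value[i] + torch.exp(time_decay + max_state - q) * num_state
        den_state = torch.exp(key[i] - q) + torch.exp(time_decay + max_state - q) * den_state
        max_state = q

    return output
```

Running this over a full sequence, or over two chunks while carrying `(num_state, den_state, max_state)` across the split, gives the same result, which is exactly the property the usage example earlier in this document checks with `torch.allclose`.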
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rwkv.md
https://huggingface.co/docs/transformers/en/model_doc/rwkv/#rwkv-attention-and-the-recurrent-formulas
#rwkv-attention-and-the-recurrent-formulas
.md
320_6
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/
.md
321_0
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction. It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It is a BPE based model and uses the GPT-4 tokenizer as described in the [tiktoken](https://github.com/openai/tiktoken) repository. We made these choices based on exhaustive evaluation and scaling experiments. DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens. We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models. This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance. We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality. More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm). This model was contributed by [eitan-turok](https://huggingface.co/eitanturok) and [abhi-db](https://huggingface.co/abhi-db). The original code can be found [here](https://github.com/databricks/dbrx-instruct), though this may not be up to date.
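The "65x more possible combinations" figure follows directly from counting the expert subsets a router can choose from; a quick sanity check in plain Python, with no library assumptions:

```python
from math import comb

dbrx_combinations = comb(16, 4)     # 16 experts, 4 active per token -> 1820
mixtral_combinations = comb(8, 2)   # 8 experts, 2 active per token  -> 28
print(dbrx_combinations, mixtral_combinations, dbrx_combinations / mixtral_combinations)
# 1820 28 65.0
```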
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#overview
#overview
.md
321_1
The `generate()` method can be used to generate text using DBRX. You can generate using the standard attention implementation, flash-attention, and the PyTorch scaled dot product attention. The last two attention implementations give speed ups. ```python from transformers import DbrxForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN") model = DbrxForCausalLM.from_pretrained( "databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, token="YOUR_HF_TOKEN", ) input_text = "What does it take to build a great LLM?" messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ``` If you have flash-attention installed (`pip install flash-attn`), it is possible to generate faster. (The HuggingFace documentation for flash-attention can be found [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2).) ```python from transformers import DbrxForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN") model = DbrxForCausalLM.from_pretrained( "databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, token="YOUR_HF_TOKEN", attn_implementation="flash_attention_2", ) input_text = "What does it take to build a great LLM?" messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ``` You can also generate faster using the PyTorch scaled dot product attention. (The HuggingFace documentation for scaled dot product attention can be found [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).) ```python from transformers import DbrxForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", token="YOUR_HF_TOKEN") model = DbrxForCausalLM.from_pretrained( "databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, token="YOUR_HF_TOKEN", attn_implementation="sdpa", ) input_text = "What does it take to build a great LLM?" messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#usage-examples
#usage-examples
.md
321_2
This is the configuration class to store the configuration of a [`DbrxModel`]. It is used to instantiate a Dbrx model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a different configuration to that of the [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: d_model (`int`, *optional*, defaults to 2048): Dimensionality of the embeddings and hidden states. n_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. n_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. max_seq_len (`int`, *optional*, defaults to 2048): The maximum sequence length of the model. vocab_size (`int`, *optional*, defaults to 32000): Vocabulary size of the Dbrx model. Defines the maximum number of different tokens that can be represented by the `inputs_ids` passed when calling [`DbrxModel`]. resid_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability applied to the attention output before combining with residual. emb_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability for the embedding layer. attn_config (`dict`, *optional*): A dictionary used to configure the model's attention module. ffn_config (`dict`, *optional*): A dictionary used to configure the model's FFN module. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. output_router_logits (`bool`, *optional*, defaults to `False`): Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss. See [here]() for more details. Example: ```python >>> from transformers import DbrxConfig, DbrxModel >>> # Initializing a Dbrx configuration >>> configuration = DbrxConfig(n_layers=2, d_model=256, n_heads=8, vocab_size=128) >>> # Initializing a model (with random weights) from the configuration >>> model = DbrxModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxconfig
#dbrxconfig
.md
321_3
The bare DBRX Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DbrxConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers*. Each layer is a [`DbrxBlock`] layer. Args: config ([`DbrxConfig`]): Model configuration class with all parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxmodel
#dbrxmodel
.md
321_4
The DBRX Model transformer for causal language modeling. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DbrxConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dbrx.md
https://huggingface.co/docs/transformers/en/model_doc/dbrx/#dbrxforcausallm
#dbrxforcausallm
.md
321_5
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/
.md
322_0
The ByT5 model was presented in [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: *Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/byt5). <Tip> ByT5's architecture is based on the T5v1.1 model, refer to [T5v1.1's documentation page](t5v1.1) for the API reference. They only differ in how inputs should be prepared for the model, see the code examples below. </Tip> Since ByT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#overview
#overview
.md
322_1
ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer: ```python >>> from transformers import T5ForConditionalGeneration >>> import torch >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> num_special_tokens = 3 >>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. >>> # => Need to shift utf-8 character encodings by 3 before passing ids to model. >>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens >>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens >>> loss = model(input_ids, labels=labels).loss >>> loss.item() 2.66 ``` For batched inference and training it is however recommended to make use of the tokenizer: ```python >>> from transformers import T5ForConditionalGeneration, AutoTokenizer >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") >>> model_inputs = tokenizer( ... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ... ) >>> labels_dict = tokenizer( ... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ... ) >>> labels = labels_dict.input_ids >>> loss = model(**model_inputs, labels=labels).loss >>> loss.item() 17.9 ``` Similar to [T5](t5), ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence `"The dog chases a ball in the park."` and ask ByT5 to predict them for us. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") >>> input_ids_prompt = "The dog chases a ball in the park." >>> input_ids = tokenizer(input_ids_prompt).input_ids >>> # Note that we cannot add "{extra_id_...}" to the string directly >>> # as the Byte tokenizer would incorrectly merge the tokens >>> # For ByT5, we need to work directly on the character level >>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead >>> # uses final utf character ids. >>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. >>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. >>> # => mask to "The dog [258]a ball [257]park." >>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) >>> input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) >>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`. 
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist() >>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] >>> # ^- Note how 258 descends to 257, 256, 255 >>> # Now we need to split on the sentinel tokens, let's write a short loop for this >>> output_ids_list = [] >>> start_token = 0 >>> sentinel_token = 258 >>> while sentinel_token in output_ids: ... split_idx = output_ids.index(sentinel_token) ... output_ids_list.append(output_ids[start_token:split_idx]) ... start_token = split_idx ... sentinel_token -= 1 >>> output_ids_list.append(output_ids[start_token:]) >>> output_string = tokenizer.batch_decode(output_ids_list) >>> output_string ['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#usage-example
#usage-example
.md
322_2
Construct a ByT5 tokenizer. ByT5 simply uses raw bytes utf-8 encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. extra_ids (`int`, *optional*, defaults to 125): Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary up to beginning ("<extra_id_0>" is the last token in the vocabulary like in ByT5 preprocessing see [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)). additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. See [`ByT5Tokenizer`] for all details.
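A small sketch of the byte-level id scheme described above, reusing the `google/byt5-small` checkpoint from the usage example; as noted there, ids 0 to 2 are reserved for the special tokens, so every UTF-8 byte is shifted by 3.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

ids = tokenizer("hello").input_ids
print(ids)                          # UTF-8 byte values + 3, followed by the </s> token
print([i - 3 for i in ids[:-1]])    # [104, 101, 108, 108, 111]
print(list("hello".encode("utf-8")))  # the same byte values, before the shift
```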
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/byt5.md
https://huggingface.co/docs/transformers/en/model_doc/byt5/#byt5tokenizer
#byt5tokenizer
.md
322_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/
.md
323_0
The [Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/) model is a major update to [Qwen-VL](https://arxiv.org/pdf/2308.12966) from the Qwen team at Alibaba Research. The abstract from the blog is the following: *This blog introduces Qwen2-VL, an advanced version of the Qwen-VL model that has undergone significant enhancements over the past year. Key improvements include enhanced image comprehension, advanced video understanding, integrated visual agent functionality, and expanded multilingual support. The model architecture has been optimized for handling arbitrary image resolutions through Naive Dynamic Resolution support and utilizes Multimodal Rotary Position Embedding (M-ROPE) to effectively process both 1D textual and multi-dimensional visual data. This updated model demonstrates competitive performance against leading AI systems like GPT-4o and Claude 3.5 Sonnet in vision-related tasks and ranks highly among open-source models in text capabilities. These advancements make Qwen2-VL a versatile tool for various applications requiring robust multimodal processing and reasoning abilities.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/qwen2_vl_architecture.jpeg" alt="drawing" width="600"/> <small> Qwen2-VL architecture. Taken from the <a href="https://qwenlm.github.io/blog/qwen2-vl/">blog post.</a> </small> This model was contributed by [simonJJJ](https://huggingface.co/simonJJJ).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#overview
#overview
.md
323_1
The model can accept both images and videos as input. Here's an example code for inference. ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", device_map="auto") processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role":"user", "content":[ { "type":"image", }, { "type":"text", "text":"Describe this image." } ] } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor(text=[text_prompt], images=[image], padding=True, return_tensors="pt") inputs = inputs.to('cuda') # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(output_text) # Video def fetch_video(ele: Dict, nframe_factor=2): if isinstance(ele['video'], str): def round_by_factor(number: int, factor: int) -> int: return round(number / factor) * factor video = ele["video"] if video.startswith("file://"): video = video[7:] video, _, info = io.read_video( video, start_pts=ele.get("video_start", 0.0), end_pts=ele.get("video_end", None), pts_unit="sec", output_format="TCHW", ) assert not ("fps" in ele and "nframes" in ele), "Only accept either `fps` or `nframes`" if "nframes" in ele: nframes = round_by_factor(ele["nframes"], nframe_factor) else: fps = ele.get("fps", 1.0) nframes = round_by_factor(video.size(0) / info["video_fps"] * fps, nframe_factor) idx = torch.linspace(0, video.size(0) - 1, nframes, dtype=torch.int64) return video[idx] video_info = {"type": "video", "video": "/path/to/video.mp4", "fps": 1.0} video = fetch_video(video_info) conversation = [ { "role": "user", "content": [ {"type": "video"}, {"type": "text", "text": "What happened in the video?"}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|video_pad|><|vision_end|>What happened in the video?<|im_end|>\n<|im_start|>assistant\n' inputs = processor(text=[text_prompt], videos=[video], padding=True, return_tensors="pt") inputs = inputs.to('cuda') # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(output_text) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#single-media-inference
#single-media-inference
.md
323_2
The model can batch inputs composed of mixed samples of various types such as images, videos, and text. Here is an example. ```python image1 = Image.open("/path/to/image1.jpg") image2 = Image.open("/path/to/image2.jpg") image3 = Image.open("/path/to/image3.jpg") image4 = Image.open("/path/to/image4.jpg") image5 = Image.open("/path/to/image5.jpg") video = fetch_video({ "type": "video", "video": "/path/to/video.mp4", "fps": 1.0 }) # Conversation for the first image conversation1 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Describe this image."} ] } ] # Conversation with two images conversation2 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "text", "text": "What is written in the pictures?"} ] } ] # Conversation with pure text conversation3 = [ { "role": "user", "content": "who are you?" } ] # Conversation with mixed midia conversation4 = [ { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "video"}, {"type": "text", "text": "What are the common elements in these medias?"}, ], } ] conversations = [conversation1, conversation2, conversation3, conversation4] # Preparation for batch inference texts = [processor.apply_chat_template(msg, add_generation_prompt=True) for msg in conversations] inputs = processor( text=texts, images=[image1, image2, image3, image4, image5], videos=[video], padding=True, return_tensors="pt", ) inputs = inputs.to('cuda') # Batch Inference output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(output_text) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#batch-mixed-media-inference
#batch-mixed-media-inference
.md
323_3
The model supports a wide range of input resolutions. By default, it uses the native resolution of the input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs.
```python
min_pixels = 224*224
max_pixels = 2048*2048
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
```
In case of limited GPU RAM, one can reduce the resolution as follows:
```python
min_pixels = 256*28*28
max_pixels = 1024*28*28
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
```
This ensures that each image is encoded using between 256 and 1024 tokens. The 28 comes from the fact that the model uses a patch size of 14 and merges 2 x 2 patches into a single token, so each visual token covers a 28 x 28 pixel area (14 x 2 = 28).
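To make the pixel-to-token relationship concrete, the sketch below gives a rough, illustrative estimate of how many visual tokens an image of a given size would produce under the 256*28*28 to 1024*28*28 pixel budget above. The `estimate_visual_tokens` helper is a simplified stand-in for the processor's internal resizing logic, not the exact implementation.
```python
import math

def estimate_visual_tokens(height, width, min_pixels=256 * 28 * 28, max_pixels=1024 * 28 * 28, factor=28):
    # Round each side to a multiple of 28 (patch size 14 x merge size 2).
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    # Scale the image so its total pixel count stays within the configured budget.
    if h * w > max_pixels:
        scale = math.sqrt(h * w / max_pixels)
        h = math.floor(h / scale / factor) * factor
        w = math.floor(w / scale / factor) * factor
    elif h * w < min_pixels:
        scale = math.sqrt(min_pixels / (h * w))
        h = math.ceil(h * scale / factor) * factor
        w = math.ceil(w * scale / factor) * factor
    # One token per 28 x 28 region after the 2 x 2 patch merge.
    return (h // factor) * (w // factor)

print(estimate_visual_tokens(1080, 1920))  # 1008 tokens, within the 256-1024 range
```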
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#image-resolution-trade-off
#image-resolution-trade-off
.md
323_4
By default, images and video content are directly included in the conversation. When handling multiple images, it's helpful to add labels to the images and videos for better reference. Users can control this behavior with the following settings:
```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Hello, how are you?"}
        ]
    },
    {
        "role": "assistant",
        "content": "I'm doing well, thank you for asking. How can I assist you today?"
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Can you describe these images and video?"},
            {"type": "image"},
            {"type": "image"},
            {"type": "video"},
            {"type": "text", "text": "These are from my vacation."}
        ]
    },
    {
        "role": "assistant",
        "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?"
    },
    {
        "role": "user",
        "content": "It was a trip to the mountains. Can you see the details in the images and video?"
    }
]

# default:
prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'

# add ids
prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n'
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#multiple-image-inputs
#multiple-image-inputs
.md
323_5
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
You should also have hardware that is compatible with FlashAttention-2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.

To load and run a model using FlashAttention-2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows:
```python
import torch
from transformers import Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", 
    torch_dtype=torch.bfloat16, 
    attn_implementation="flash_attention_2",
)
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#flash-attention-2-to-speed-up-generation
#flash-attention-2-to-speed-up-generation
.md
323_6
This is the configuration class to store the configuration of a [`Qwen2VLModel`]. It is used to instantiate a Qwen2-VL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of Qwen2-VL-7B-Instruct [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 152064): Vocabulary size of the Qwen2VL model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Qwen2VLModel`] hidden_size (`int`, *optional*, defaults to 8192): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 29568): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 80): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 64): Number of attention heads for each attention layer in the Transformer encoder. num_key_value_heads (`int`, *optional*, defaults to 8): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details checkout [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `32`. hidden_act (`str` or `function`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 32768): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the rms normalization layers. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. rope_theta (`float`, *optional*, defaults to 1000000.0): The base period of the RoPE embeddings. use_sliding_window (`bool`, *optional*, defaults to `False`): Whether to use sliding window attention. sliding_window (`int`, *optional*, defaults to 4096): Sliding window attention (SWA) window size. If not specified, will default to `4096`. max_window_layers (`int`, *optional*, defaults to 80): The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. vision_config (`Dict`, *optional*): The config for the visual encoder initialization. rope_scaling (`Dict`, *optional*): Dictionary containing the scaling configuration for the RoPE embeddings. 
NOTE: if you apply a new rope type and you expect the model to work on a longer `max_position_embeddings`, we recommend you update this value accordingly.
        Expected contents:
            `rope_type` (`str`):
                The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation.
            `factor` (`float`, *optional*):
                Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a `factor` of x will enable the model to handle sequences of length x * original maximum pre-trained length.
            `original_max_position_embeddings` (`int`, *optional*):
                Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
            `attention_factor` (`float`, *optional*):
                Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the `factor` field to infer the suggested value.
            `beta_fast` (`float`, *optional*):
                Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32.
            `beta_slow` (`float`, *optional*):
                Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1.
            `short_factor` (`List[float]`, *optional*):
                Only used with 'longrope'. The scaling factor to be applied to short contexts (< `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2
            `long_factor` (`List[float]`, *optional*):
                Only used with 'longrope'. The scaling factor to be applied to long contexts (> `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2
            `low_freq_factor` (`float`, *optional*):
                Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
            `high_freq_factor` (`float`, *optional*):
                Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
```python
>>> from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig

>>> # Initializing a Qwen2VL style configuration
>>> configuration = Qwen2VLConfig()

>>> # Initializing a model from the Qwen2-VL-7B style configuration
>>> model = Qwen2VLForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlconfig
#qwen2vlconfig
.md
323_7
Constructs a Qwen2-VL image processor that dynamically resizes images based on the original image resolution.
Args:
    do_resize (`bool`, *optional*, defaults to `True`):
        Whether to resize the image's (height, width) dimensions.
    resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
        Resampling filter to use when resizing the image.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`.
    rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
        Scale factor to use if rescaling the image.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Whether to normalize the image.
    image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
        Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
    image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
        Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
    do_convert_rgb (`bool`, *optional*, defaults to `True`):
        Whether to convert the image to RGB.
    min_pixels (`int`, *optional*, defaults to `56 * 56`):
        The minimum number of pixels allowed when resizing the image.
    max_pixels (`int`, *optional*, defaults to `28 * 28 * 1280`):
        The maximum number of pixels allowed when resizing the image.
    patch_size (`int`, *optional*, defaults to 14):
        The spatial patch size of the vision encoder.
    temporal_patch_size (`int`, *optional*, defaults to 2):
        The temporal patch size of the vision encoder.
    merge_size (`int`, *optional*, defaults to 2):
        The merge size between the vision encoder and the LLM encoder.
Methods: preprocess
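As a brief, illustrative sketch of using the image processor on its own (the image URL is the demo image from the inference section above; treat the exact output shapes as indicative only):
```python
import requests
from PIL import Image
from transformers import Qwen2VLImageProcessor

image = Image.open(requests.get("https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", stream=True).raw)

# Constrain the pixel budget so large images are downscaled before patching.
image_processor = Qwen2VLImageProcessor(min_pixels=256 * 28 * 28, max_pixels=1024 * 28 * 28)
encoding = image_processor(images=image, return_tensors="pt")

print(encoding["pixel_values"].shape)  # flattened visual patches
print(encoding["image_grid_thw"])      # (temporal, height, width) patch grid per image
```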
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlimageprocessor
#qwen2vlimageprocessor
.md
323_8
Constructs a Qwen2-VL processor which wraps a Qwen2-VL image processor and a Qwen2 tokenizer into a single processor. [`Qwen2VLProcessor`] offers all the functionalities of [`Qwen2VLImageProcessor`] and [`Qwen2TokenizerFast`]. See the [`~Qwen2VLProcessor.__call__`] and [`~Qwen2VLProcessor.decode`] for more information. Args: image_processor ([`Qwen2VLImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`Qwen2TokenizerFast`], *optional*): The tokenizer is a required input. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
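A minimal sketch of how the wrapped tokenizer and image processor work together; the checkpoint and image URL are only examples:
```python
import requests
from PIL import Image
from transformers import Qwen2VLProcessor

processor = Qwen2VLProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
conversation = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]

# The tokenizer side renders the chat template; the image processor side encodes the image.
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], padding=True, return_tensors="pt")

# Token ids can be mapped back to text with the wrapped tokenizer.
print(processor.batch_decode(inputs["input_ids"], skip_special_tokens=True))
```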
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlprocessor
#qwen2vlprocessor
.md
323_9
The bare Qwen2VL Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`Qwen2VLConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlmodel
#qwen2vlmodel
.md
323_10
No docstring available for Qwen2VLForConditionalGeneration Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2_vl.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2_vl/#qwen2vlforconditionalgeneration
#qwen2vlforconditionalgeneration
.md
323_11
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the MIT License; you may not use this file except in compliance with the License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/
.md
324_0
The SuperPoint model was proposed in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629) by Daniel DeTone, Tomasz Malisiewicz and Andrew Rabinovich. This model is the result of self-supervised training of a fully-convolutional network for interest point detection and description. The model is able to detect interest points that are repeatable under homographic transformations and to provide a descriptor for each point. The use of the model on its own is limited, but it can be used as a feature extractor for other tasks such as homography estimation, image matching, etc. The abstract from the paper is the following: *This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/superpoint_architecture.png" alt="drawing" width="500"/> <small> SuperPoint overview. Taken from the <a href="https://arxiv.org/abs/1712.07629v4">original paper.</a> </small>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#overview
#overview
.md
324_1
Here is a quick example of using the model to detect interest points in an image:
```python
from transformers import AutoImageProcessor, SuperPointForKeypointDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

inputs = processor(image, return_tensors="pt")
outputs = model(**inputs)
```
The outputs contain the list of keypoint coordinates with their respective scores and descriptions (256-dimensional vectors).

You can also feed multiple images to the model. Because SuperPoint outputs a dynamic number of keypoints per image, you will need to use the mask attribute to retrieve the respective information:
```python
from transformers import AutoImageProcessor, SuperPointForKeypointDetection
import torch
from PIL import Image
import requests

url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_1 = Image.open(requests.get(url_image_1, stream=True).raw)
url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg"
image_2 = Image.open(requests.get(url_image_2, stream=True).raw)

images = [image_1, image_2]

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

inputs = processor(images, return_tensors="pt")
outputs = model(**inputs)
image_sizes = [(image.height, image.width) for image in images]
outputs = processor.post_process_keypoint_detection(outputs, image_sizes)

for output in outputs:
    for keypoints, scores, descriptors in zip(output["keypoints"], output["scores"], output["descriptors"]):
        print(f"Keypoints: {keypoints}")
        print(f"Scores: {scores}")
        print(f"Descriptors: {descriptors}")
```
You can then plot the keypoints on the image of your choice to visualize the result:
```python
import matplotlib.pyplot as plt

plt.axis("off")
plt.imshow(image_1)
plt.scatter(
    outputs[0]["keypoints"][:, 0],
    outputs[0]["keypoints"][:, 1],
    c=outputs[0]["scores"] * 100,
    s=outputs[0]["scores"] * 50,
    alpha=0.8
)
plt.savefig("output_image.png")
```
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png)

This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). The original code can be found [here](https://github.com/magicleap/SuperPointPretrainedNetwork).
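Since SuperPoint is typically used as a feature extractor for downstream tasks such as image matching, the sketch below applies a simple mutual nearest-neighbor matching step to the descriptors returned by `post_process_keypoint_detection` above. This is only an illustration under that assumption, not the matching strategy from the paper (which pairs SuperPoint with a dedicated matcher).
```python
import torch

# Descriptors of the two post-processed outputs above (shapes N1 x 256 and N2 x 256).
desc1 = outputs[0]["descriptors"]
desc2 = outputs[1]["descriptors"]

# Pairwise L2 distances between the two descriptor sets.
distances = torch.cdist(desc1, desc2)

# Mutual nearest neighbors: keep (i, j) only if each is the other's closest descriptor.
nn12 = distances.argmin(dim=1)
nn21 = distances.argmin(dim=0)
mutual = nn21[nn12] == torch.arange(desc1.shape[0])
matches = torch.stack([torch.arange(desc1.shape[0])[mutual], nn12[mutual]], dim=1)

print(f"{matches.shape[0]} mutual matches between the two images")
```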
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#usage-tips
#usage-tips
.md
324_2
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 🌎
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#resources
#resources
.md
324_3
This is the configuration class to store the configuration of a [`SuperPointForKeypointDetection`]. It is used to instantiate a SuperPoint model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SuperPoint [magic-leap-community/superpoint](https://huggingface.co/magic-leap-community/superpoint) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: encoder_hidden_sizes (`List`, *optional*, defaults to `[64, 64, 128, 128]`): The number of channels in each convolutional layer in the encoder. decoder_hidden_size (`int`, *optional*, defaults to 256): The hidden size of the decoder. keypoint_decoder_dim (`int`, *optional*, defaults to 65): The output dimension of the keypoint decoder. descriptor_decoder_dim (`int`, *optional*, defaults to 256): The output dimension of the descriptor decoder. keypoint_threshold (`float`, *optional*, defaults to 0.005): The threshold to use for extracting keypoints. max_keypoints (`int`, *optional*, defaults to -1): The maximum number of keypoints to extract. If `-1`, will extract all keypoints. nms_radius (`int`, *optional*, defaults to 4): The radius for non-maximum suppression. border_removal_distance (`int`, *optional*, defaults to 4): The distance from the border to remove keypoints. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import SuperPointConfig, SuperPointForKeypointDetection >>> # Initializing a SuperPoint superpoint style configuration >>> configuration = SuperPointConfig() >>> # Initializing a model from the superpoint style configuration >>> model = SuperPointForKeypointDetection(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointconfig
#superpointconfig
.md
324_4
Constructs a SuperPoint image processor.
Args:
    do_resize (`bool`, *optional*, defaults to `True`):
        Controls whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method.
    size (`Dict[str, int]`, *optional*, defaults to `{"height": 480, "width": 640}`):
        Resolution of the output image after `resize` is applied. Only has an effect if `do_resize` is set to `True`. Can be overridden by `size` in the `preprocess` method.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method.
    rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
        Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method.
Methods: preprocess
    - post_process_keypoint_detection
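A short, illustrative sketch of using the image processor directly; the `size` shown is simply the default made explicit, and the image URL is only an example:
```python
import requests
from PIL import Image
from transformers import SuperPointImageProcessor

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Resize to the default 480 x 640 resolution and rescale pixel values to [0, 1].
processor = SuperPointImageProcessor(size={"height": 480, "width": 640})
inputs = processor(image, return_tensors="pt")

print(inputs["pixel_values"].shape)  # batch of preprocessed images ready for the model
```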
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointimageprocessor
#superpointimageprocessor
.md
324_5
SuperPoint model outputting keypoints and descriptors. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SuperPointConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. SuperPoint model. It consists of a SuperPointEncoder, a SuperPointInterestPointDecoder and a SuperPointDescriptorDecoder. SuperPoint was proposed in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629) by Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. It is a fully convolutional neural network that extracts keypoints and descriptors from an image. It is trained in a self-supervised manner, using a combination of a photometric loss and a loss based on the homographic adaptation of keypoints. It is made of a convolutional encoder and two decoders: one for keypoints and one for descriptors. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/superpoint.md
https://huggingface.co/docs/transformers/en/model_doc/superpoint/#superpointforkeypointdetection
#superpointforkeypointdetection
.md
324_6
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/
.md
325_0
FLAN-T5 was released in the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) - it is an enhanced version of T5 that has been finetuned on a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model:
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

>>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Pour a cup of bolognese into a large bowl and add the pasta']
```
FLAN-T5 includes the same improvements as T5 version 1.1 (see [here](https://huggingface.co/docs/transformers/model_doc/t5v1.1) for the full details of the model's improvements). Google has released the following variants:

- [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)
- [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
- [google/flan-t5-large](https://huggingface.co/google/flan-t5-large)
- [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl)
- [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl)

The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints). <Tip> Refer to [T5's documentation page](t5) for all API reference, code examples and notebooks. For more details regarding training and evaluation of the FLAN-T5, refer to the model card. </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flan-t5.md
https://huggingface.co/docs/transformers/en/model_doc/flan-t5/#overview
#overview
.md
325_1
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/
.md
326_0
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=ctrl"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrl
#ctrl
.md
326_1
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data, with the first token reserved as a control code (such as Links, Books, Wikipedia, etc.). The abstract from the paper is the following: *Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution.* This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found [here](https://github.com/salesforce/ctrl).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#overview
#overview
.md
326_2
- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links to produce coherent text (a minimal sketch follows this list). Refer to the [original implementation](https://github.com/salesforce/ctrl) for more information.
- CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be observed in the *run_generation.py* example script.
- The PyTorch models can take the `past_key_values` as input, which are the previously computed key/value attention pairs. TensorFlow models accept `past` as input. Using the `past_key_values` value prevents the model from re-computing pre-computed values in the context of text generation. See the [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument.
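The sketch below illustrates the control-code idea with the `Salesforce/ctrl` checkpoint: the prompt starts with a control code ("Links") so the generation follows the corresponding domain/style. Treat it as an illustrative example rather than a tuned recipe; the checkpoint is large (~1.6B parameters), and the generation settings here are just reasonable defaults.
```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# Start the prompt with a control code (here "Links") so the generation follows that style.
inputs = tokenizer("Links Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```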
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#usage-tips
#usage-tips
.md
326_3
- [Text classification task guide](../tasks/sequence_classification) - [Causal language modeling task guide](../tasks/language_modeling)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#resources
#resources
.md
326_4
This is the configuration class to store the configuration of a [`CTRLModel`] or a [`TFCTRLModel`]. It is used to instantiate a CTRL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [Salesforce/ctrl](https://huggingface.co/Salesforce/ctrl) architecture from SalesForce. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 246534): Vocabulary size of the CTRL model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`CTRLModel`] or [`TFCTRLModel`]. n_positions (`int`, *optional*, defaults to 256): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (`int`, *optional*, defaults to 1280): Dimensionality of the embeddings and hidden states. dff (`int`, *optional*, defaults to 8192): Dimensionality of the inner dimension of the feed forward networks (FFN). n_layer (`int`, *optional*, defaults to 48): Number of hidden layers in the Transformer encoder. n_head (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. resid_pdrop (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (`int`, *optional*, defaults to 0.1): The dropout ratio for the embeddings. layer_norm_epsilon (`float`, *optional*, defaults to 1e-06): The epsilon to use in the layer normalization layers initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Examples: ```python >>> from transformers import CTRLConfig, CTRLModel >>> # Initializing a CTRL configuration >>> configuration = CTRLConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = CTRLModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrlconfig
#ctrlconfig
.md
326_5
Construct a CTRL tokenizer. Based on Byte-Pair-Encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Methods: save_vocabulary <frameworkcontent> <pt>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/ctrl.md
https://huggingface.co/docs/transformers/en/model_doc/ctrl/#ctrltokenizer
#ctrltokenizer
.md
326_6