Constructs a Pix2Struct processor which wraps a T5 tokenizer and a Pix2Struct image processor into a single processor. [`Pix2StructProcessor`] offers all the functionalities of [`Pix2StructImageProcessor`] and [`T5TokenizerFast`]. See the docstring of [`~Pix2StructProcessor.__call__`] and [`~Pix2StructProcessor.decode`] for more information. Args: image_processor (`Pix2StructImageProcessor`): An instance of [`Pix2StructImageProcessor`]. The image processor is a required input. tokenizer (Union[`T5TokenizerFast`, `T5Tokenizer`]): An instance of [`T5TokenizerFast`] or [`T5Tokenizer`]. The tokenizer is a required input.
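A minimal sketch of how the processor is typically used (the checkpoint name and image URL below are illustrative assumptions, not part of the original docs):

```python
# Hedged sketch: the processor bundles the image processor and the T5 tokenizer.
import requests
from PIL import Image
from transformers import Pix2StructProcessor

# Illustrative checkpoint; any Pix2Struct checkpoint with a processor should work.
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# Illustrative image URL.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Images are turned into flattened patches plus an attention mask;
# a `text=` argument would additionally be tokenized by the wrapped T5 tokenizer.
inputs = processor(images=image, return_tensors="pt")
print({k: v.shape for k, v in inputs.items()})
```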
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md
https://huggingface.co/docs/transformers/en/model_doc/pix2struct/#pix2structprocessor
#pix2structprocessor
.md
422_6
Constructs a Pix2Struct image processor. Args: do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. According to Pix2Struct paper and code, the image is normalized with its own mean and standard deviation. patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 16, "width": 16}`): The patch size to use for the image. According to Pix2Struct paper and code, the patch size is 16x16. max_patches (`int`, *optional*, defaults to 2048): The maximum number of patches to extract from the image as per the [Pix2Struct paper](https://arxiv.org/pdf/2210.03347.pdf). is_vqa (`bool`, *optional*, defaults to `False`): Whether or not the image processor is for the VQA task. If `True` and `header_text` is passed in, text is rendered onto the input images. Methods: preprocess
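A short, hedged sketch of the image processor on its own, showing the effect of `patch_size` and `max_patches` (the dummy image and parameter values are assumptions for illustration):

```python
# Hedged sketch: extract flattened patches from a single image.
import numpy as np
from transformers import Pix2StructImageProcessor

image_processor = Pix2StructImageProcessor(patch_size={"height": 16, "width": 16}, max_patches=1024)

# A dummy RGB image; real use would pass PIL images or numpy arrays.
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

outputs = image_processor(images=image, return_tensors="np")
# Each image becomes `max_patches` flattened patches (row/col indices followed by
# 16*16*3 pixel values), with an attention mask marking real vs. padded patches.
print(outputs["flattened_patches"].shape)  # expected: (1, 1024, 2 + 16*16*3)
print(outputs["attention_mask"].shape)     # expected: (1, 1024)
```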
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md
https://huggingface.co/docs/transformers/en/model_doc/pix2struct/#pix2structimageprocessor
#pix2structimageprocessor
.md
422_7
The standalone text decoder of Pix2Struct. The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. It is an encoder-decoder transformer pre-trained in an image-to-text setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (Union[`Pix2StructConfig`, `Pix2StructTextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md
https://huggingface.co/docs/transformers/en/model_doc/pix2struct/#pix2structtextmodel
#pix2structtextmodel
.md
422_8
The bare Pix2StructVision Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Pix2StructConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md
https://huggingface.co/docs/transformers/en/model_doc/pix2struct/#pix2structvisionmodel
#pix2structvisionmodel
.md
422_9
A conditional generation model with a language modeling head. Can be used for sequence generation tasks. The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. It is an encoder-decoder transformer pre-trained in an image-to-text setting. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config (Union[`Pix2StructConfig`, `Pix2StructTextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
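A brief, hedged example of conditional generation (image captioning). The checkpoint and image URL are illustrative assumptions:

```python
# Hedged sketch: generate a caption from an image with Pix2StructForConditionalGeneration.
import requests
import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model_id = "google/pix2struct-textcaps-base"  # illustrative captioning checkpoint
processor = Pix2StructProcessor.from_pretrained(model_id)
model = Pix2StructForConditionalGeneration.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor produces flattened patches + attention mask for the model.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```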
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pix2struct.md
https://huggingface.co/docs/transformers/en/model_doc/pix2struct/#pix2structforconditionalgeneration
#pix2structforconditionalgeneration
.md
422_10
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/
.md
423_0
The Mamba2 model was proposed in [Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https://arxiv.org/abs/2405.21060) by Tri Dao and Albert Gu. It is a State Space Model similar to Mamba 1, with better performance and a simplified architecture. The abstract from the paper is the following: *While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.* Tips: This version should support all implementations of Mamba 2, and in particular [Mamba-2 codestral](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1) from Mistral AI. In particular, Mamba-2 codestral was released with a number of `groups` equal to 8, which can be thought of intuitively as similar to the number of kv heads in an attention-based model. This model has two different forward passes, `torch_forward` and `cuda_kernels_forward`. The latter uses the original CUDA kernels if they are found in your environment, and is slower on the prefill, i.e. it requires a "warmup run" due to high CPU overhead, see [here](https://github.com/state-spaces/mamba/issues/389#issuecomment-2171755306) and [also here](https://github.com/state-spaces/mamba/issues/355#issuecomment-2147597457). Without compilation, the `torch_forward` implementation is faster by a factor of 3 to 4. Further, there are no positional embeddings in this model, but there is an `attention_mask` and specific logic to mask out hidden states in two places in the case of batched generation, see [here](https://github.com/state-spaces/mamba/issues/66#issuecomment-1863563829) as well. Due to this, in addition to the reimplementation of the Mamba2 kernels, batched generation and cached generation are expected to have slight discrepancies. Further, the results given by the CUDA kernels and the torch forward are expected to be slightly different. The SSM algorithm heavily relies on tensor contractions, which have matmul equivalents but a slightly different order of operations, making the difference larger at smaller precisions. Another note: the shutdown of hidden states corresponding to padding tokens is done in two places and has mostly been tested with left-padding. Right-padding will propagate noise down the line and is not guaranteed to yield satisfactory results. Setting `tokenizer.padding_side = "left"` ensures you are using the correct padding side; see the sketch below. This model was contributed by [Molbap](https://huggingface.co/Molbap), with tremendous help from [Anton Vlasjuk](https://github.com/vasqu). The original code can be found [here](https://github.com/state-spaces/mamba).
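Below is a minimal, hedged sketch of batched generation with left padding as recommended above; the checkpoint and revision mirror the example in the next section and are assumptions:

```python
# Hedged sketch: batched generation with left padding for Mamba2.
from transformers import AutoTokenizer, Mamba2ForCausalLM

model_id = "mistralai/Mamba-Codestral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, revision="refs/pr/9", from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # right padding may propagate noise through the SSM states

model = Mamba2ForCausalLM.from_pretrained(model_id, revision="refs/pr/9")

prompts = ["Hey how are you doing?", "Write a haiku about state space models."]
batch = tokenizer(prompts, padding=True, return_tensors="pt")
out = model.generate(**batch, max_new_tokens=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```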
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/#overview
#overview
.md
423_1
```python
from transformers import Mamba2Config, Mamba2ForCausalLM, AutoTokenizer
import torch

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
Here's a draft script for finetuning:
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, Mamba2ForCausalLM, TrainingArguments

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # enforce padding side left

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
dataset = load_dataset("Abirate/english_quotes", split="train")

# Without CUDA kernels, a batch size of 2 occupies one 80GB device,
# but precision can be reduced.
# Experiments and trials welcome!
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3,
)
lora_config = LoraConfig(
    r=8,
    target_modules=["embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/#a-simple-generation-example
#a-simple-generation-example
.md
423_2
This is the configuration class to store the configuration of a [`Mamba2Model`]. It is used to instantiate a MAMBA2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MAMBA2 [state-spaces/mamba2-2.8b](https://huggingface.co/state-spaces/mamba2-2.8b) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_heads (`int`, *optional*, defaults to 128): Number of heads for the evolution matrices of mamba 2. head_dim (`int`, *optional*, defaults to 64): Dimension of each head. vocab_size (`int`, *optional*, defaults to 32768): Vocabulary size of the MAMBA2 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Mamba2Model`]. hidden_size (`int`, *optional*, defaults to 4096): Dimensionality of the embeddings and hidden states. state_size (`int`, *optional*, defaults to 128): shape of the state space latents. num_hidden_layers (`int`, *optional*, defaults to 64): Number of hidden layers in the model. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*, defaults to 0): The id of the beginning of sentence token in the vocabulary. eos_token_id (`int`, *optional*, defaults to 2): The id of the end of sentence token in the vocabulary. expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size. conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel. n_groups (`int`, *optional*, defaults to 8): Number of groups for the evolution matrices of mamba 2. use_bias (`bool`, *optional*, defaults to `False`): Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block use_conv_bias (`bool`, *optional*, defaults to `True`): Whether or not to use bias in the convolution layer of the mixer block. hidden_act (`str`, *optional*, defaults to `"silu"`): The non-linear activation function (function or string) in the decoder. initializer_range (`float`, *optional*, defaults to 0.1): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. residual_in_fp32 (`bool`, *optional*, defaults to `True`): Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`): Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)` time_step_min (`float`, *optional*, defaults to 0.001): Minimum `time_step` used to bound `dt_proj.bias`. time_step_max (`float`, *optional*, defaults to 0.1): Maximum `time_step` used to bound `dt_proj.bias`. time_step_floor (`float`, *optional*, defaults to 0.0001): Minimum clamping value of the `dt_proj.bias` layer initialization. time_step_limit (`tuple`, *optional*, defaults to `(0.0, inf)`): Accepted range of time step values. rescale_prenorm_residual (`bool`, *optional*, defaults to `False`): Whether or not to rescale `out_proj` weights when initializing. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the cache should be used. 
rms_norm (`bool`, *optional*, defaults to `True`): Whether to use RMS norm or not. chunk_size (`int`, *optional*, defaults to 256): Size of the chunks that will comprise the sequence. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether to tie word embeddings or not. Example:
```python
>>> from transformers import Mamba2Config, Mamba2Model

>>> # Initializing a Mamba2 configuration
>>> configuration = Mamba2Config()

>>> # Initializing a model (with random weights) from the configuration
>>> model = Mamba2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/#mamba2config
#mamba2config
.md
423_3
The bare MAMBA2 Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Mamba2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/#mamba2model
#mamba2model
.md
423_4
The MAMBA2 Model transformer with a language modeling head on top (linear layer with weights not tied to the input embeddings). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Mamba2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mamba2.md
https://huggingface.co/docs/transformers/en/model_doc/mamba2/#mamba2lmheadmodel
#mamba2lmheadmodel
.md
423_5
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/
.md
424_0
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling. Most of those are only useful if you are studying the code of the models in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#custom-layers-and-utilities
#custom-layers-and-utilities
.md
424_1
pytorch_utils.Conv1D 1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). Basically works like a linear layer but the weights are transposed. Args: nf (`int`): The number of output features. nx (`int`): The number of input features. modeling_utils.PoolerStartLogits Compute SQuAD start logits from sequence hidden states. Args: config ([`PretrainedConfig`]): The config used by the model, will be used to grab the `hidden_size` of the model. - forward modeling_utils.PoolerEndLogits Compute SQuAD end logits from sequence hidden states. Args: config ([`PretrainedConfig`]): The config used by the model, will be used to grab the `hidden_size` of the model and the `layer_norm_eps` to use. - forward modeling_utils.PoolerAnswerClass Compute SQuAD 2.0 answer class from classification and start tokens hidden states. Args: config ([`PretrainedConfig`]): The config used by the model, will be used to grab the `hidden_size` of the model. - forward modeling_utils.SquadHeadOutput Base class for outputs of question answering models using a [`~modeling_utils.SQuADHead`]. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided): Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses. start_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Log probabilities for the top config.start_n_top start token possibilities (beam-search). start_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Indices for the top config.start_n_top start token possibilities (beam-search). end_top_log_probs (`torch.FloatTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Log probabilities for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search). end_top_index (`torch.LongTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search). cls_logits (`torch.FloatTensor` of shape `(batch_size,)`, *optional*, returned if `start_positions` or `end_positions` is not provided): Log probabilities for the `is_impossible` label of the answers. modeling_utils.SQuADHead A SQuAD head inspired by XLNet. Args: config ([`PretrainedConfig`]): The config used by the model, will be used to grab the `hidden_size` of the model and the `layer_norm_eps` to use. - forward modeling_utils.SequenceSummary Compute a single vector summary of a sequence hidden states. Args: config ([`PretrainedConfig`]): The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses): - **summary_type** (`str`) -- The method to use to make this summary. 
Accepted values are: - `"last"` -- Take the last token hidden state (like XLNet) - `"first"` -- Take the first token hidden state (like Bert) - `"mean"` -- Take the mean of all tokens hidden states - `"cls_index"` -- Supply a Tensor of classification token position (GPT/GPT-2) - `"attn"` -- Not implemented now, use multi-head attention - **summary_use_proj** (`bool`) -- Add a projection after the vector extraction. - **summary_proj_to_labels** (`bool`) -- If `True`, the projection outputs to `config.num_labels` classes (otherwise to `config.hidden_size`). - **summary_activation** (`Optional[str]`) -- Set to `"tanh"` to add a tanh activation to the output, another string or `None` will add no activation. - **summary_first_dropout** (`float`) -- Optional dropout probability before the projection and activation. - **summary_last_dropout** (`float`)-- Optional dropout probability after the projection and activation. - forward
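A short, hedged sketch of `pytorch_utils.Conv1D`, assuming the current module layout (`transformers.pytorch_utils`); the sizes are illustrative:

```python
# Hedged sketch: Conv1D acts like nn.Linear with transposed weights.
import torch
from transformers.pytorch_utils import Conv1D

layer = Conv1D(nf=8, nx=4)    # 4 input features -> 8 output features
x = torch.randn(2, 10, 4)     # (batch, seq_len, nx)
y = layer(x)
print(layer.weight.shape)     # torch.Size([4, 8]) -- (nx, nf), transposed vs. nn.Linear
print(y.shape)                # torch.Size([2, 10, 8])
```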
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#pytorch-custom-modules
#pytorch-custom-modules
.md
424_2
pytorch_utils.apply_chunking_to_forward This function chunks the `input_tensors` into smaller input tensor parts of size `chunk_size` over the dimension `chunk_dim`. It then applies a layer `forward_fn` to each chunk independently to save memory. If the `forward_fn` is independent across the `chunk_dim` this function will yield the same result as directly applying `forward_fn` to `input_tensors`. Args: forward_fn (`Callable[..., torch.Tensor]`): The forward function of the model. chunk_size (`int`): The chunk size of a chunked tensor: `num_chunks = len(input_tensors[0]) / chunk_size`. chunk_dim (`int`): The dimension over which the `input_tensors` should be chunked. input_tensors (`Tuple[torch.Tensor]`): The input tensors of `forward_fn` which will be chunked. Returns: `torch.Tensor`: A tensor with the same shape as the `forward_fn` would have given if applied. Examples:
```python
# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
    hidden_states = self.decoder(hidden_states)
    return hidden_states


# implement a chunked forward function
def forward(self, hidden_states):
    return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states)
```
pytorch_utils.find_pruneable_heads_and_indices Finds the heads and their indices taking `already_pruned_heads` into account. Args: heads (`List[int]`): List of the indices of heads to prune. n_heads (`int`): The number of heads in the model. head_size (`int`): The size of each head. already_pruned_heads (`Set[int]`): A set of already pruned heads. Returns: `Tuple[Set[int], torch.LongTensor]`: A tuple with the indices of heads to prune taking `already_pruned_heads` into account and the indices of rows/columns to keep in the layer weight. pytorch_utils.prune_layer Prune a Conv1D or linear layer to keep only entries in index. Used to remove heads. Args: layer (`Union[torch.nn.Linear, Conv1D]`): The layer to prune. index (`torch.LongTensor`): The indices to keep in the layer. dim (`int`, *optional*): The dimension on which to keep the indices. Returns: `torch.nn.Linear` or [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`. pytorch_utils.prune_conv1d_layer Prune a Conv1D layer to keep only entries in index. A Conv1D works as a Linear layer (see e.g. BERT) but the weights are transposed. Used to remove heads. Args: layer ([`~pytorch_utils.Conv1D`]): The layer to prune. index (`torch.LongTensor`): The indices to keep in the layer. dim (`int`, *optional*, defaults to 1): The dimension on which to keep the indices. Returns: [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`. pytorch_utils.prune_linear_layer Prune a linear layer to keep only entries in index. Used to remove heads. Args: layer (`torch.nn.Linear`): The layer to prune. index (`torch.LongTensor`): The indices to keep in the layer. dim (`int`, *optional*, defaults to 0): The dimension on which to keep the indices. Returns: `torch.nn.Linear`: The pruned layer as a new layer with `requires_grad=True`.
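A small, hedged sketch of the pruning helpers on a toy attention projection (the head counts and sizes are illustrative assumptions):

```python
# Hedged sketch: prune two attention heads from a linear projection.
import torch
from transformers.pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer

n_heads, head_size = 12, 64
q_proj = torch.nn.Linear(768, n_heads * head_size)

# Figure out which rows/columns to keep when pruning heads 0 and 3.
heads, index = find_pruneable_heads_and_indices(
    heads=[0, 3], n_heads=n_heads, head_size=head_size, already_pruned_heads=set()
)
pruned = prune_linear_layer(q_proj, index, dim=0)  # dim=0 prunes output features
print(pruned.weight.shape)  # torch.Size([640, 768]) -- 10 remaining heads * 64
```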
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#pytorch-helper-functions
#pytorch-helper-functions
.md
424_3
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-custom-layers
#tensorflow-custom-layers
.md
424_4
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-loss-functions
#tensorflow-loss-functions
.md
424_5
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/modeling_utils.md
https://huggingface.co/docs/transformers/en/internal/modeling_utils/#tensorflow-helper-functions
#tensorflow-helper-functions
.md
424_6
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/
.md
425_0
This page lists all of Transformers' general utility functions that are found in the file `utils.py`. Most of those are only useful if you are studying the general code in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/#general-utilities
#general-utilities
.md
425_1
utils.ExplicitEnum Enum with more explicit error message for missing values. utils.PaddingStrategy Possible values for the `padding` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for tab-completion in an IDE. utils.TensorType Possible values for the `return_tensors` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for tab-completion in an IDE.
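A hedged sketch showing that the enums can be passed anywhere their string values are accepted (the checkpoint name is an illustrative assumption):

```python
# Hedged sketch: PaddingStrategy and TensorType stand in for the usual string arguments.
from transformers import AutoTokenizer
from transformers.utils import PaddingStrategy, TensorType

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
batch = tokenizer(
    ["a short sentence", "a slightly longer example sentence"],
    padding=PaddingStrategy.LONGEST,    # same as padding="longest"
    return_tensors=TensorType.PYTORCH,  # same as return_tensors="pt"
)
print(batch["input_ids"].shape)
```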
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/#enums-and-namedtuples
#enums-and-namedtuples
.md
425_2
utils.add_start_docstrings utils.add_start_docstrings_to_model_forward utils.add_end_docstrings utils.add_code_sample_docstrings utils.replace_return_docstrings
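A hedged sketch of the docstring decorators; they prepend or append shared text to an object's `__doc__` (the exact whitespace of the combined docstring may differ):

```python
# Hedged sketch: compose a docstring from shared fragments plus the function's own docstring.
from transformers.utils import add_start_docstrings, add_end_docstrings

@add_end_docstrings("Extra note appended at the end.")
@add_start_docstrings("Shared introduction. ", "More shared context. ")
def dummy_forward(hidden_states):
    """Function-specific docstring."""
    return hidden_states

# Prints the combined docstring: shared intro + function docstring + appended note.
print(dummy_forward.__doc__)
```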
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/#special-decorators
#special-decorators
.md
425_3
utils.cached_property Descriptor that mimics @property but caches the output in a member variable. Adapted from tensorflow_datasets. Built into functools from Python 3.8 onward.
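A minimal, hedged sketch of the caching behavior (assuming `cached_property` is importable from `transformers.utils`):

```python
# Hedged sketch: the value is computed once and then served from the instance cache.
from transformers.utils import cached_property

class Example:
    @cached_property
    def expensive_value(self):
        print("computing...")
        return 42

e = Example()
print(e.expensive_value)  # "computing..." then 42
print(e.expensive_value)  # 42 (cached, no recomputation)
```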
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/#special-properties
#special-properties
.md
425_4
utils._LazyModule Module class that surfaces all objects but only performs associated imports when the objects are requested.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/file_utils.md
https://huggingface.co/docs/transformers/en/internal/file_utils/#other-utilities
#other-utilities
.md
425_5
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md
https://huggingface.co/docs/transformers/en/internal/audio_utils/
.md
426_0
This page lists all the utility functions that can be used by the audio [`FeatureExtractor`] in order to compute special features from raw audio using common algorithms such as the *Short Time Fourier Transform* or the *log mel spectrogram*. Most of those are only useful if you are studying the code of the audio processors in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md
https://huggingface.co/docs/transformers/en/internal/audio_utils/#utilities-for-featureextractors
#utilities-for-featureextractors
.md
426_1
audio_utils.hertz_to_mel Convert frequency from hertz to mels. Args: freq (`float` or `np.ndarray`): The frequency, or multiple frequencies, in hertz (Hz). mel_scale (`str`, *optional*, defaults to `"htk"`): The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`. Returns: `float` or `np.ndarray`: The frequencies on the mel scale. audio_utils.mel_to_hertz Convert frequency from mels to hertz. Args: mels (`float` or `np.ndarray`): The frequency, or multiple frequencies, in mels. mel_scale (`str`, *optional*, defaults to `"htk"`): The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`. Returns: `float` or `np.ndarray`: The frequencies in hertz. audio_utils.mel_filter_bank Creates a frequency bin conversion matrix used to obtain a mel spectrogram. This is called a *mel filter bank*, and various implementations exist, which differ in the number of filters, the shape of the filters, the way the filters are spaced, the bandwidth of the filters, and the manner in which the spectrum is warped. The goal of these features is to approximate the non-linear human perception of the variation in pitch with respect to the frequency. Different banks of mel filters were introduced in the literature. The following variations are supported: - MFCC FB-20: introduced in 1980 by Davis and Mermelstein, it assumes a sampling frequency of 10 kHz and a speech bandwidth of `[0, 4600]` Hz. - MFCC FB-24 HTK: from the Cambridge HMM Toolkit (HTK) (1995), uses a filter bank of 24 filters for a speech bandwidth of `[0, 8000]` Hz. This assumes a sampling rate ≥ 16 kHz. - MFCC FB-40: from the Auditory Toolbox for MATLAB written by Slaney in 1998, assumes a sampling rate of 16 kHz and a speech bandwidth of `[133, 6854]` Hz. This version also includes area normalization. - HFCC-E FB-29 (Human Factor Cepstral Coefficients) of Skowronski and Harris (2004), assumes a sampling rate of 12.5 kHz and a speech bandwidth of `[0, 6250]` Hz. This code is adapted from *torchaudio* and *librosa*. Note that the default parameters of torchaudio's `melscale_fbanks` implement the `"htk"` filters while librosa uses the `"slaney"` implementation. Args: num_frequency_bins (`int`): Number of frequencies used to compute the spectrogram (should be the same as in `stft`). num_mel_filters (`int`): Number of mel filters to generate. min_frequency (`float`): Lowest frequency of interest in Hz. max_frequency (`float`): Highest frequency of interest in Hz. This should not exceed `sampling_rate / 2`. sampling_rate (`int`): Sample rate of the audio waveform. norm (`str`, *optional*): If `"slaney"`, divide the triangular mel weights by the width of the mel band (area normalization). mel_scale (`str`, *optional*, defaults to `"htk"`): The mel frequency scale to use, `"htk"`, `"kaldi"` or `"slaney"`. triangularize_in_mel_space (`bool`, *optional*, defaults to `False`): If this option is enabled, the triangular filter is applied in mel space rather than frequency space. This should be set to `true` in order to get the same results as `torchaudio` when computing mel filters. Returns: `np.ndarray` of shape (`num_frequency_bins`, `num_mel_filters`): Triangular filter bank matrix. This is a projection matrix to go from a spectrogram to a mel spectrogram. audio_utils.optimal_fft_length Finds the best FFT input size for a given `window_length`. This function takes a given window length and, if not already a power of two, rounds it up to the next power of two.
The FFT algorithm works fastest when the length of the input is a power of two, which may be larger than the size of the window or analysis frame. For example, if the window is 400 samples, using an FFT input size of 512 samples is more optimal than an FFT size of 400 samples. Using a larger FFT size does not affect the detected frequencies, it simply gives a higher frequency resolution (i.e. the frequency bins are smaller). audio_utils.window_function Returns an array containing the specified window. This window is intended to be used with `stft`. The following window types are supported: - `"boxcar"`: a rectangular window - `"hamming"`: the Hamming window - `"hann"`: the Hann window - `"povey"`: the Povey window Args: window_length (`int`): The length of the window in samples. name (`str`, *optional*, defaults to `"hann"`): The name of the window function. periodic (`bool`, *optional*, defaults to `True`): Whether the window is periodic or symmetric. frame_length (`int`, *optional*): The length of the analysis frames in samples. Provide a value for `frame_length` if the window is smaller than the frame length, so that it will be zero-padded. center (`bool`, *optional*, defaults to `True`): Whether to center the window inside the FFT buffer. Only used when `frame_length` is provided. Returns: `np.ndarray` of shape `(window_length,)` or `(frame_length,)` containing the window. audio_utils.spectrogram Calculates a spectrogram over one waveform using the Short-Time Fourier Transform. This function can create the following kinds of spectrograms: - amplitude spectrogram (`power = 1.0`) - power spectrogram (`power = 2.0`) - complex-valued spectrogram (`power = None`) - log spectrogram (use `log_mel` argument) - mel spectrogram (provide `mel_filters`) - log-mel spectrogram (provide `mel_filters` and `log_mel`) How this works: 1. The input waveform is split into frames of size `frame_length` that are partially overlapping by `frame_length - hop_length` samples. 2. Each frame is multiplied by the window and placed into a buffer of size `fft_length`. 3. The DFT is taken of each windowed frame. 4. The results are stacked into a spectrogram. We make a distinction between the following "blocks" of sample data, each of which may have a different lengths: - The analysis frame. This is the size of the time slices that the input waveform is split into. - The window. Each analysis frame is multiplied by the window to avoid spectral leakage. - The FFT input buffer. The length of this determines how many frequency bins are in the spectrogram. In this implementation, the window is assumed to be zero-padded to have the same size as the analysis frame. A padded window can be obtained from `window_function()`. The FFT input buffer may be larger than the analysis frame, typically the next power of two. Note: This function is not optimized for speed yet. It should be mostly compatible with `librosa.stft` and `torchaudio.functional.transforms.Spectrogram`, although it is more flexible due to the different ways spectrograms can be constructed. Args: waveform (`np.ndarray` of shape `(length,)`): The input waveform. This must be a single real-valued, mono waveform. window (`np.ndarray` of shape `(frame_length,)`): The windowing function to apply, including zero-padding if necessary. The actual window length may be shorter than `frame_length`, but we're assuming the array has already been zero-padded. frame_length (`int`): The length of the analysis frames in samples. 
With librosa this is always equal to `fft_length` but we also allow smaller sizes. hop_length (`int`): The stride between successive analysis frames in samples. fft_length (`int`, *optional*): The size of the FFT buffer in samples. This determines how many frequency bins the spectrogram will have. For optimal speed, this should be a power of two. If `None`, uses `frame_length`. power (`float`, *optional*, defaults to 1.0): If 1.0, returns the amplitude spectrogram. If 2.0, returns the power spectrogram. If `None`, returns complex numbers. center (`bool`, *optional*, defaults to `True`): Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `False`, frame `t` will start at time `t * hop_length`. pad_mode (`str`, *optional*, defaults to `"reflect"`): Padding mode used when `center` is `True`. Possible values are: `"constant"` (pad with zeros), `"edge"` (pad with edge values), `"reflect"` (pads with mirrored values). onesided (`bool`, *optional*, defaults to `True`): If True, only computes the positive frequencies and returns a spectrogram containing `fft_length // 2 + 1` frequency bins. If False, also computes the negative frequencies and returns `fft_length` frequency bins. preemphasis (`float`, *optional*): Coefficient for a low-pass filter that applies pre-emphasis before the DFT. mel_filters (`np.ndarray` of shape `(num_freq_bins, num_mel_filters)`, *optional*): The mel filter bank. If supplied, applies this filter bank to create a mel spectrogram. mel_floor (`float`, *optional*, defaults to 1e-10): Minimum value of mel frequency banks. log_mel (`str`, *optional*): How to convert the spectrogram to log scale. Possible options are: `None` (don't convert), `"log"` (take the natural logarithm), `"log10"` (take the base-10 logarithm), `"dB"` (convert to decibels). Can only be used when `power` is not `None`. reference (`float`, *optional*, defaults to 1.0): Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set the loudest part to 0 dB. Must be greater than zero. min_value (`float`, *optional*, defaults to `1e-10`): The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking `log(0)`. For a power spectrogram, the default of `1e-10` corresponds to a minimum of -100 dB. For an amplitude spectrogram, the value `1e-5` corresponds to -100 dB. Must be greater than zero. db_range (`float`, *optional*): Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. remove_dc_offset (`bool`, *optional*): Subtract mean from waveform on each frame, applied before pre-emphasis. This should be set to `true` in order to get the same results as `torchaudio.compliance.kaldi.fbank` when computing mel filters. dtype (`np.dtype`, *optional*, defaults to `np.float32`): Data type of the spectrogram tensor. If `power` is None, this argument is ignored and the dtype will be `np.complex64`. Returns: `np.ndarray` containing a spectrogram of shape `(num_frequency_bins, length)` for a regular spectrogram or shape `(num_mel_filters, length)` for a mel spectrogram. audio_utils.power_to_db Converts a power spectrogram to the decibel scale. This computes `10 * log10(spectrogram / reference)`, using basic logarithm properties for numerical stability.
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. Based on the implementation of `librosa.power_to_db`. Args: spectrogram (`np.ndarray`): The input power (mel) spectrogram. Note that a power spectrogram has the amplitudes squared! reference (`float`, *optional*, defaults to 1.0): Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set the loudest part to 0 dB. Must be greater than zero. min_value (`float`, *optional*, defaults to `1e-10`): The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking `log(0)`. The default of `1e-10` corresponds to a minimum of -100 dB. Must be greater than zero. db_range (`float`, *optional*): Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. Returns: `np.ndarray`: the spectrogram in decibels audio_utils.amplitude_to_db Converts an amplitude spectrogram to the decibel scale. This computes `20 * log10(spectrogram / reference)`, using basic logarithm properties for numerical stability. The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. Args: spectrogram (`np.ndarray`): The input amplitude (mel) spectrogram. reference (`float`, *optional*, defaults to 1.0): Sets the input spectrogram value that corresponds to 0 dB. For example, use `np.max(spectrogram)` to set the loudest part to 0 dB. Must be greater than zero. min_value (`float`, *optional*, defaults to `1e-5`): The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking `log(0)`. The default of `1e-5` corresponds to a minimum of -100 dB. Must be greater than zero. db_range (`float`, *optional*): Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. Returns: `np.ndarray`: the spectrogram in decibels
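A hedged sketch chaining the helpers documented above into a log-mel spectrogram; the parameters loosely follow common 16 kHz speech settings and are illustrative, not prescribed by the library:

```python
# Hedged sketch: window -> mel filter bank -> log-mel spectrogram with transformers.audio_utils.
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

sampling_rate = 16000
frame_length, hop_length = 400, 160  # 25 ms frames, 10 ms hop
waveform = np.random.randn(sampling_rate).astype(np.float32)  # 1 second of dummy audio

window = window_function(frame_length, "hann")
mel_filters = mel_filter_bank(
    num_frequency_bins=1 + frame_length // 2,  # matches fft_length defaulting to frame_length
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=sampling_rate,
    norm="slaney",
    mel_scale="slaney",
)
log_mel = spectrogram(
    waveform,
    window,
    frame_length=frame_length,
    hop_length=hop_length,
    power=2.0,             # power spectrogram before the mel projection
    mel_filters=mel_filters,
    log_mel="log10",       # take the base-10 logarithm of the mel spectrogram
)
print(log_mel.shape)  # (num_mel_filters, num_frames)
```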
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/audio_utils.md
https://huggingface.co/docs/transformers/en/internal/audio_utils/#audio-transformations
#audio-transformations
.md
426_2
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/
.md
427_0
This page lists all the utility functions used by [`~generation.GenerationMixin.generate`].
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#utilities-for-generation
#utilities-for-generation
.md
427_1
The output of [`~generation.GenerationMixin.generate`] is an instance of a subclass of [`~utils.ModelOutput`]. This output is a data structure containing all the information returned by [`~generation.GenerationMixin.generate`], but it can also be used as a tuple or a dictionary. Here's an example:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
The `generation_output` object is a [`~generation.GenerateDecoderOnlyOutput`]. As we can see in the documentation of that class below, this means it has the following attributes: - `sequences`: the generated sequences of tokens - `scores` (optional): the prediction scores of the language modeling head, for each generation step - `hidden_states` (optional): the hidden states of the model, for each generation step - `attentions` (optional): the attention weights of the model, for each generation step Here we have the `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`. You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here for instance `generation_output.scores` are all the generated prediction scores of the language modeling head, and `generation_output.attentions` is `None`. When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values. Here, for instance, it has two elements, `sequences` then `scores`, so
```python
generation_output[:2]
```
will return the tuple `(generation_output.sequences, generation_output.scores)`. When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None` values. Here, for instance, it has two keys that are `sequences` and `scores`. We document here all output types.
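A small, hedged continuation of the example above, showing the tuple view and attribute access (it reuses the `generation_output` variable defined in the snippet above):

```python
# (continuing from the example above)
sequences, scores = generation_output[:2]  # tuple view keeps only the non-None fields
print(sequences.shape)                     # (batch_size, sequence_length)
print(len(scores))                         # one processed score tensor per generated token
print(generation_output["sequences"].shape)  # dict-style access works as well
print(generation_output.attentions)        # None: output_attentions=True was not passed
```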
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#generate-outputs
#generate-outputs
.md
427_2
generation.GenerateDecoderOnlyOutput Outputs of decoder-only generation models, when using non-beam methods. Args: sequences (`torch.LongTensor` of shape `(batch_size, sequence_length)`): The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`): Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`. past_key_values (`tuple(tuple(torch.FloatTensor)))`, *optional*, returned when `use_cache=True`): Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a [`~cache_utils.Cache`] instance. generation.GenerateEncoderDecoderOutput Outputs of encoder-decoder generation models, when using non-beam methods. Args: sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`): The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`): Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. 
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`. past_key_values (`tuple(tuple(torch.FloatTensor)))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a [`~cache_utils.Cache`] instance. generation.GenerateBeamDecoderOnlyOutput Outputs of decoder-only generation models, when using beam methods. Args: sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`): The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`. sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`): Final beam scores of the generated `sequences`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`. logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`): Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams, num_heads, generated_length, sequence_length)`. 
hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`. past_key_values (`tuple(tuple(torch.FloatTensor)))`, *optional*, returned when `use_cache=True`): Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a [`~cache_utils.Cache`] instance. generation.GenerateBeamEncoderDecoderOutput Outputs of encoder-decoder generation models, when using beam methods. Args: sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`): The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`. sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`): Final beam scores of the generated `sequences`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`. logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`): Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size*num_beams*num_return_sequences, sequence_length, hidden_size)`. decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length)`. cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. 
decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`. past_key_values (`tuple(tuple(torch.FloatTensor)))`, *optional*, returned when `use_cache=True`): Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a [`~cache_utils.Cache`] instance.
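These output classes are returned by `generate` in place of a plain tensor of token ids whenever `return_dict_in_generate=True` is passed. A minimal sketch of how to obtain and inspect one of them (the checkpoint name below is only an example; any causal language model behaves the same way):

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")

>>> # Requesting a structured output instead of a plain tensor of generated ids
>>> outputs = model.generate(
...     **inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True
... )

>>> # For non-beam generation with a decoder-only model, `outputs` is a `GenerateDecoderOnlyOutput`:
>>> # `outputs.sequences` holds the generated ids (prompt + new tokens) and `outputs.scores` holds
>>> # one tensor of processed logits per generated token.
>>> generated_text = tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)
```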
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch
#pytorch
.md
427_3
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput [[autodoc]] generation.TFGreedySearchDecoderOnlyOutput [[autodoc]] generation.TFSampleEncoderDecoderOutput [[autodoc]] generation.TFSampleDecoderOnlyOutput [[autodoc]] generation.TFBeamSearchEncoderDecoderOutput [[autodoc]] generation.TFBeamSearchDecoderOnlyOutput [[autodoc]] generation.TFBeamSampleEncoderDecoderOutput [[autodoc]] generation.TFBeamSampleDecoderOnlyOutput [[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput [[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#tensorflow
#tensorflow
.md
427_4
[[autodoc]] generation.FlaxSampleOutput [[autodoc]] generation.FlaxGreedySearchOutput [[autodoc]] generation.FlaxBeamSearchOutput
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#flax
#flax
.md
427_5
A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for generation.
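For instance, a custom [`LogitsProcessor`] can be written by subclassing it and passing it to `generate` through a [`LogitsProcessorList`]. The sketch below is illustrative only: `BlockTokenLogitsProcessor` is a hypothetical processor, not part of the library.

```python
>>> import torch
>>> from transformers import (
...     AutoModelForCausalLM,
...     AutoTokenizer,
...     LogitsProcessor,
...     LogitsProcessorList,
... )

>>> class BlockTokenLogitsProcessor(LogitsProcessor):
...     """Hypothetical processor that forbids a single token id at every generation step."""
...
...     def __init__(self, blocked_token_id: int):
...         self.blocked_token_id = blocked_token_id
...
...     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
...         # Setting a logit to -inf removes the token from consideration at this step
...         scores[:, self.blocked_token_id] = -float("inf")
...         return scores

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("The capital of France is", return_tensors="pt")

>>> # Custom processors passed this way are applied on top of the ones `generate` builds internally
>>> processors = LogitsProcessorList([BlockTokenLogitsProcessor(tokenizer.eos_token_id)])
>>> outputs = model.generate(**inputs, max_new_tokens=5, logits_processor=processors)
```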
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#logitsprocessor
#logitsprocessor
.md
427_6
[`LogitsProcessor`] enforcing alternated generation between the two codebooks of Bark. <Tip warning={true}> This logits processor is exclusively compatible with [Bark](https://huggingface.co/docs/transformers/en/model_doc/bark)'s fine submodel. See the model documentation for examples. </Tip> Args: input_start_len (`int`): The length of the initial input sequence. semantic_vocab_size (`int`): Vocabulary size of the semantic part, i.e number of tokens associated to the semantic vocabulary. codebook_size (`int`): Number of tokens associated to the codebook. - __call__ [`LogitsProcessor`] for classifier free guidance (CFG). The scores are split over the batch dimension, where the first half correspond to the conditional logits (predicted from the input prompt) and the second half correspond to the unconditional logits (predicted from an empty or 'null' prompt). The processor computes a weighted average across the conditional and unconditional logits, parameterised by the `guidance_scale`. See [the paper](https://arxiv.org/abs/2306.05284) for more information. <Tip warning={true}> This logits processor is exclusively compatible with [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen) </Tip> Args: guidance_scale (float): The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. Examples: ```python >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small") >>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") >>> inputs = processor( ... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], ... padding=True, ... return_tensors="pt", ... ) >>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256) ``` - __call__ [`LogitsProcessor`] that works similarly to [`NoRepeatNGramLogitsProcessor`], but applied exclusively to prevent the repetition of n-grams present in the prompt. It was designed to promote chattiness in a language model, by preventing the generation of n-grams present in previous conversation rounds. Args: encoder_ngram_size (`int`): All ngrams of size `ngram_size` can only occur within the encoder input ids. encoder_input_ids (`int`): The encoder_input_ids that should not be repeated within the decoder ids. Examples: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer("Alice: I love cats. What do you love?\nBob:", return_tensors="pt") >>> # With greedy decoding, we see Bob repeating Alice's opinion. If Bob was a chatbot, it would be a poor one. >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) Alice: I love cats. What do you love? Bob: I love cats. What do you >>> # With this logits processor, we can prevent Bob from repeating Alice's opinion. >>> outputs = model.generate(**inputs, encoder_no_repeat_ngram_size=2) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) Alice: I love cats. What do you love? Bob: My cats are very cute. 
``` - __call__ [`LogitsProcessor`] that works similarly to [`RepetitionPenaltyLogitsProcessor`], but with an *inverse* penalty that is applied to the tokens present in the prompt. In other words, a penalty above 1.0 increases the odds of selecting tokens that were present in the prompt. It was designed to avoid hallucination in input-grounded tasks, like summarization. Although originally intended for encoder-decoder models, it can also be used with decoder-only models like LLMs. Args: penalty (`float`): The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 rewards prompt tokens. Between 0.0 and 1.0 penalizes prompt tokens. encoder_input_ids (`torch.LongTensor`): The encoder_input_ids that should be repeated within the decoder ids. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer(["Alice and Bob. The third member's name was"], return_tensors="pt") >>> gen_out = model.generate(**inputs) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) Alice and Bob. The third member's name was not mentioned. >>> # With the `encoder_repetition_penalty` argument we can trigger this logits processor in `generate`, which can >>> # promote the use of prompt tokens ("Bob" in this example) >>> gen_out = model.generate(**inputs, encoder_repetition_penalty=1.2) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) Alice and Bob. The third member's name was Bob. The third member's name was Bob. ``` - __call__ [`LogitsProcessor`] that performs epsilon-sampling, i.e. restricting to tokens with `prob >= epsilon`. Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. See [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191) for more information. Args: epsilon (`float`): If set to > 0, only the most tokens with probabilities `epsilon` or higher are kept for generation. filter_value (`float`, *optional*, defaults to -inf): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(1) >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt") >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With epsilon sampling, the output gets restricted to high-probability tokens. Note that this is similar to >>> # Top P sampling, which restricts tokens based on their cumulative probability. 
>>> # Pro tip: The paper recomends using `epsilon_cutoff` values between 3e-4 and 9e-4 >>> outputs = model.generate(**inputs, do_sample=True, epsilon_cutoff=0.1) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9 ``` - __call__ [`LogitsProcessor`] that performs eta-sampling, a technique to filter out tokens with probabilities below a dynamic cutoff value, `eta`, which is calculated based on a combination of the hyperparameter `epsilon` and the entropy of the token probabilities, i.e. `eta := min(epsilon, sqrt(epsilon * e^-entropy(probabilities)))`. Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. It addresses the issue of poor quality in long samples of text generated by neural language models leading to more coherent and fluent text. See [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191) for more information. Note: `do_sample` must be set to `True` for this `LogitsProcessor` to work. Args: epsilon (`float`): A float value in the range (0, 1). Hyperparameter used to calculate the dynamic cutoff value, `eta`. The suggested values from the paper ranges from 3e-4 to 4e-3 depending on the size of the model. filter_value (`float`, *optional*, defaults to -inf): All values that are found to be below the dynamic cutoff value, `eta`, are set to this float value. This parameter is useful when logits need to be modified for very low probability tokens that should be excluded from generation entirely. min_tokens_to_keep (`int`, *optional*, defaults to 1): Specifies the minimum number of tokens that must be kept for generation, regardless of their probabilities. For example, if `min_tokens_to_keep` is set to 1, at least one token will always be kept for generation, even if all tokens have probabilities below the cutoff `eta`. device (`str`, *optional*, defaults to `"cpu"`): The device to allocate the tensors. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(1) >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt") >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With eta sampling, the output gets restricted to high-probability tokens. You can see it as a dynamic form of >>> # epsilon sampling that adapts its cutoff probability based on the entropy (high entropy = lower cutoff). >>> # Pro tip: The paper recomends using `eta_cutoff` values between 3e-4 to 4e-3 >>> outputs = model.generate(**inputs, do_sample=True, eta_cutoff=0.1) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9 ``` - __call__ [`LogitsProcessor`] that exponentially increases the score of the `eos_token_id` after `start_index` has been reached. This allows generating shorter sequences without having a hard cutoff, allowing the `eos_token` to be predicted in a meaningful position. 
Args: exponential_decay_length_penalty (`tuple(int, float)`): This tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates where penalty starts and `decay_factor` represents the factor of exponential decay eos_token_id (`Union[int, List[int], torch.Tensor]`): The id(s) of the *end-of-sequence* token. input_ids_seq_length (`int`): The length of the input sequence. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> text = "Just wanted to let you know, I" >>> inputs = tokenizer(text, return_tensors="pt") >>> # Let's consider that we want short sentences, so we limit `max_length=30`. However, we observe that the answer >>> # tends to end abruptly. >>> set_seed(1) >>> outputs = model.generate(**inputs, do_sample=True, temperature=0.9, max_length=30, pad_token_id=50256) >>> print(tokenizer.batch_decode(outputs)[0]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was published in 2010. Although >>> # To promote the appearance of the EOS token at the right time, we add the `exponential_decay_length_penalty = >>> # (start_index, decay_factor)`. Instead of cutting at max_tokens, the output comes to an end before and usually >>> # with more meaning. What happens is that starting from `start_index` the EOS token score will be increased >>> # by `decay_factor` exponentially. However, if you set a high decay factor, you may also end up with abruptly >>> # ending sequences. >>> set_seed(1) >>> outputs = model.generate( ... **inputs, ... do_sample=True, ... temperature=0.9, ... max_length=30, ... pad_token_id=50256, ... exponential_decay_length_penalty=(15, 1.6), ... ) >>> print(tokenizer.batch_decode(outputs)[0]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which<|endoftext|> >>> # With a small decay factor, you will have a higher chance of getting a meaningful sequence. >>> set_seed(1) >>> outputs = model.generate( ... **inputs, ... do_sample=True, ... temperature=0.9, ... max_length=30, ... pad_token_id=50256, ... exponential_decay_length_penalty=(15, 1.01), ... ) >>> print(tokenizer.batch_decode(outputs)[0]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was published in 2010.<|endoftext|> ``` - __call__ [`LogitsProcessor`] that enforces the specified token as the first generated token. Used with encoder-decoder models. Args: bos_token_id (`int`): The id of the token to force as the first generated token. 
Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") >>> inputs = tokenizer("Translate from English to German: I love cats.", return_tensors="pt") >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens=10) >>> print(tokenizer.batch_decode(outputs)[0]) <pad> Ich liebe Kitty.</s> >>> # We can use `forced_bos_token_id` to force the start of generation with an encoder-decoder model >>> # (including forcing it to end straight away with an EOS token) >>> outputs = model.generate(**inputs, max_new_tokens=10, forced_bos_token_id=tokenizer.eos_token_id) >>> print(tokenizer.batch_decode(outputs)[0]) <pad></s> ``` - __call__ [`LogitsProcessor`] that enforces the specified token as the last generated token when `max_length` is reached. Args: max_length (`int`): The maximum length of the sequence to be generated. eos_token_id (`Union[int, List[int], torch.Tensor]`): The id(s) of the *end-of-sequence* token. device (`str`, *optional*, defaults to `"cpu"`): The device to allocate the tensors. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2, 3", return_tensors="pt") >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens=10) >>> print(tokenizer.batch_decode(outputs)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7, 8 >>> # `forced_eos_token_id` ensures the generation ends with a EOS token >>> outputs = model.generate(**inputs, max_new_tokens=10, forced_eos_token_id=tokenizer.eos_token_id) >>> print(tokenizer.batch_decode(outputs)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7,<|endoftext|> ``` - __call__ [`LogitsProcessor`] that enforces diverse beam search. Note that this logits processor is only effective for [`PreTrainedModel.group_beam_search`]. See [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf) for more details. Traditional beam search often generates very similar sequences across different beams. `HammingDiversityLogitsProcessor` addresses this by penalizing beams that generate tokens already chosen by other beams in the same time step. Args: diversity_penalty (`float`): This value is subtracted from a beam's score if it generates a token same as any beam from other group at a particular time. A higher `diversity_penalty` will enforce greater diversity among the beams. Adjusting this value can help strike a balance between diversity and natural likelihood. num_beams (`int`): Number of beams for beam search. 1 means no beam search. num_beam_groups (`int`): Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> # Initialize the model and tokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base") >>> # A long text about the solar system >>> text = ( ... 
"The Solar System is a gravitationally bound system comprising the Sun and the objects that orbit it, " ... "either directly or indirectly. Of the objects that orbit the Sun directly, the largest are the eight " ... "planets, with the remainder being smaller objects, such as the five dwarf planets and small Solar System " ... "bodies. The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant " ... "interstellar molecular cloud." ... ) >>> inputs = tokenizer("summarize: " + text, return_tensors="pt") >>> # Generate diverse summary >>> outputs_diverse = model.generate( ... **inputs, ... num_beam_groups=2, ... diversity_penalty=10.0, ... max_length=100, ... num_beams=4, ... num_return_sequences=2, ... ) >>> summaries_diverse = tokenizer.batch_decode(outputs_diverse, skip_special_tokens=True) >>> # Generate non-diverse summary >>> outputs_non_diverse = model.generate( ... **inputs, ... max_length=100, ... num_beams=4, ... num_return_sequences=2, ... ) >>> summary_non_diverse = tokenizer.batch_decode(outputs_non_diverse, skip_special_tokens=True) >>> # With `diversity_penalty`, the resulting beams are much more diverse >>> print(summary_non_diverse) ['the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.', 'the Solar System formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.'] >>> print(summaries_diverse) ['the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.', 'the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets. the rest of the objects are smaller objects, such as the five dwarf planets and small solar system bodies.'] ``` - __call__ [`LogitsProcessor`] that removes all `nan` and `inf` values to avoid the generation method to fail. Note that using the logits processor should only be used if necessary since it can slow down the generation method. This logits processor has no `generate` example, as there shouldn't be a correct combination of flags that warrants its use. - __call__ [`LogitsProcessor`] for normalizing the scores using log-softmax. It's important to normalize the scores during beam search, after applying the logits processors or warpers, since the search algorithm used in this library doesn't do it (it only does it before, but they may need re-normalization) but it still supposes that the scores are normalized when comparing the hypotheses. 
Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2, 3", return_tensors="pt") >>> # By default, the scores are not normalized -- the sum of their exponentials is NOT a normalized probability >>> # distribution, summing to 1 >>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True) >>> print(torch.allclose(torch.sum(torch.exp(outputs.scores[-1])), torch.Tensor((1.000,)), rtol=1e-4)) False >>> # Normalizing them may have a positive impact on beam methods, or when using the scores on your application >>> outputs = model.generate(**inputs, renormalize_logits=True, return_dict_in_generate=True, output_scores=True) >>> print(torch.allclose(torch.sum(torch.exp(outputs.scores[-1])), torch.Tensor((1.000,)), rtol=1e-4)) True ``` - __call__ Abstract base class for all logit processors that can be applied during generation. - __call__ A list of [`LogitsProcessor`] instances, used to apply each processor in sequence during generation. - __call__ [`LogitsProcessor`] enforcing a min-length by setting EOS probability to 0. Note that, for decoder-only models like most LLMs, the length includes the prompt. Args: min_length (`int`): The minimum length below which the score of `eos_token_id` is set to `-float("Inf")`. eos_token_id (`Union[int, List[int], torch.Tensor]`): The id(s) of the *end-of-sequence* token. device (`str`, *optional*, defaults to `"cpu"`): The device to allocate the tensors. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer("A number:", return_tensors="pt") >>> gen_out = model.generate(**inputs) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) A number: one >>> # setting `min_length` to a value smaller than the uncontrolled output length has no impact >>> gen_out = model.generate(**inputs, min_length=3) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) A number: one >>> # setting a larger `min_length` will force the model to generate beyond its natural ending point, which is not >>> # necessarily incorrect >>> gen_out = model.generate(**inputs, min_length=10) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) A number: one thousand, nine hundred and ninety-four ``` - __call__ [`LogitsProcessor`] enforcing a min-length of new tokens by setting EOS (End-Of-Sequence) token probability to 0. Unlike [`MinLengthLogitsProcessor`], this processor ignores the prompt. Args: prompt_length_to_skip (`int`): The input tokens length. Not a valid argument when used with `generate` as it will automatically assign the input length. min_new_tokens (`int`): The minimum *new* tokens length below which the score of `eos_token_id` is set to `-float("Inf")`. eos_token_id (`Union[int, List[int], torch.Tensor]`): The id(s) of the *end-of-sequence* token. device (`str`, *optional*, defaults to `"cpu"`): The device to allocate the tensors.
Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer(["A number:"], return_tensors="pt") >>> gen_out = model.generate(**inputs) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) A number: one >>> # setting `min_new_tokens` will force the model to generate beyond its natural ending point, which is not >>> # necessarily incorrect >>> gen_out = model.generate(**inputs, min_new_tokens=2) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) A number: one thousand ``` - __call__ [`LogitsProcessor`] that performs min-p, i.e. keeps all tokens that are above a minimum probability, scaled by the probability of the most likely token. As a result, the filter becomes more aggressive in the presence of high-probability tokens, which is a sign of a confident output that we shouldn't deviate from. Often used together with [`TemperatureLogitsWarper`]. Used as an alternative to [`TopPLogitsWarper`] and [`TopKLogitsWarper`]. Created by @menhguin and @kalomaze (github handles). Code adapted from [this external PR](https://github.com/oobabooga/text-generation-webui/pull/4449/files). Args: min_p (`float`): Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0 and 1. Typical values are in the 0.01-0.2 range, comparably selective to setting `top_p` in the 0.99-0.8 range (use the opposite of normal `top_p` values). filter_value (`float`, *optional*, defaults to -inf): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(1) >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt") >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With `min_p` sampling, the output gets restricted to high-probability tokens. >>> # Pro tip: In practice, LLMs use `min_p` in the 0.01-0.2 range. >>> outputs = model.generate(**inputs, do_sample=True, min_p=0.1) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9 ``` - __call__ [`LogitsProcessor`] that enforces that specified sequences will never be selected. <Tip> In order to get the token ids of the words that should not appear in the generated text, make sure to set `add_prefix_space=True` when initializing the tokenizer, and use `tokenizer(bad_words, add_special_tokens=False).input_ids`. The `add_prefix_space` argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours come from `pre tokenizers`. Read more [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers). </Tip> Args: bad_words_ids (`List[List[int]]`): List of lists of token ids that are not allowed to be generated. eos_token_id (`Union[int, List[int], torch.Tensor]`, *optional*): The id(s) of the *end-of-sequence* token.
Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(["In a word, the cake is a"], return_tensors="pt") >>> output_ids = model.generate(inputs["input_ids"], max_new_tokens=5, pad_token_id=tokenizer.eos_token_id) >>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]) In a word, the cake is a bit of a mess. >>> # Now let's take the bad words out. Please note that the tokenizer is initialized differently >>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("openai-community/gpt2", add_prefix_space=True) >>> def get_tokens_as_list(word_list): ... "Converts a sequence of words into a list of tokens" ... tokens_list = [] ... for word in word_list: ... tokenized_word = tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0] ... tokens_list.append(tokenized_word) ... return tokens_list >>> bad_words_ids = get_tokens_as_list(word_list=["mess"]) >>> output_ids = model.generate( ... inputs["input_ids"], max_new_tokens=5, bad_words_ids=bad_words_ids, pad_token_id=tokenizer.eos_token_id ... ) >>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]) In a word, the cake is a bit of a surprise. ``` - __call__ N-grams are groups of "n" consecutive words, characters, or tokens taken from a sequence of text. Given the sentence: "She runs fast", the bi-grams (n=2) would be ("she", "runs") and ("runs", "fast"). In text generation, avoiding repetitions of word sequences provides a more diverse output. This [`LogitsProcessor`] enforces no repetition of n-grams by setting the scores of banned tokens to negative infinity which eliminates those tokens from consideration when further processing the scores. Note that, for decoder-only models like most LLMs, the prompt is also considered to obtain the n-grams. [Fairseq](https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345). <Tip> Use n-gram penalties with care. For instance, penalizing 2-grams (bigrams) in an article about the city of New York might lead to undesirable outcomes where the city's name appears only once in the entire text. [Reference](https://huggingface.co/blog/how-to-generate) </Tip> Args: ngram_size (`int`): All ngrams of size `ngram_size` can only occur once. Examples: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer(["Today I"], return_tensors="pt") >>> output = model.generate(**inputs) >>> print(tokenizer.decode(output[0], skip_special_tokens=True)) Today I’m not sure if I’m going to be able to do it. >>> # Now let's add ngram size using `no_repeat_ngram_size`. This stops the repetitions ("I’m") in the output. >>> output = model.generate(**inputs, no_repeat_ngram_size=2) >>> print(tokenizer.decode(output[0], skip_special_tokens=True)) Today I’m not sure if I can get a better understanding of the nature of this issue ``` - __call__ [`LogitsProcessor`] that enforces constrained generation and is useful for prefix-conditioned constrained generation. See [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904) for more information. 
Args: prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`): This function constrains the beam search to allowed tokens only at each step. This function takes 2 arguments: `input_ids` and the batch ID `batch_id`. It has to return a list with the allowed tokens for the next generation step conditioned on the previously generated tokens `input_ids` and the batch ID `batch_id`. Examples: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer("Alice and Bob", return_tensors="pt") >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens=5) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) Alice and Bob are friends >>> # We can constrain it with `prefix_allowed_tokens_fn` to force a certain behavior based on a prefix. >>> # For instance, we can force an entire entity to be generated when its beginning is detected. >>> entity = tokenizer(" Bob Marley", return_tensors="pt").input_ids[0] # 3 tokens >>> def prefix_allowed_tokens_fn(batch_id, input_ids): ... ''' ... Attempts to generate 'Bob Marley' when 'Bob' is detected. ... In this case, `batch_id` is not used, but you can set rules for each batch member. ... ''' ... if input_ids[-1] == entity[0]: ... return [entity[1].item()] ... elif input_ids[-2] == entity[0] and input_ids[-1] == entity[1]: ... return [entity[2].item()] ... return list(range(tokenizer.vocab_size)) # If no match, allow all tokens >>> outputs = model.generate(**inputs, max_new_tokens=5, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) Alice and Bob Marley ``` - __call__ [`LogitsProcessor`] that prevents the repetition of previous tokens through a penalty. This penalty is applied at most once per token. Note that, for decoder-only models like most LLMs, the considered tokens include the prompt. In the original [paper](https://arxiv.org/pdf/1909.05858.pdf), the authors suggest the use of a penalty of around 1.2 to achieve a good balance between truthful generation and lack of repetition. To penalize and reduce repetition, use `penalty` values above 1.0, where a higher value penalizes more strongly. To reward and encourage repetition, use `penalty` values between 0.0 and 1.0, where a lower value rewards more strongly. Args: penalty (`float`): The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 penalizes previously generated tokens. Between 0.0 and 1.0 rewards previously generated tokens. Examples: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> # Initializing the model and tokenizer for it >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer(["I'm not going to"], return_tensors="pt") >>> # This shows a normal generate without any specific parameters >>> summary_ids = model.generate(**inputs) >>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0]) I'm not going to be able to do that.
I'm going to be able to do that >>> # This generates a penalty for repeated tokens >>> penalized_ids = model.generate(**inputs, repetition_penalty=1.1) >>> print(tokenizer.batch_decode(penalized_ids, skip_special_tokens=True)[0]) I'm not going to be able to do that. I'll just have to go out and play ``` - __call__ [`LogitsProcessor`] that applies an additive bias on sequences. The bias is applied to the last token of a sequence when the next generated token can complete it. Consequently, to take the most of biasing sequences with more than one token, consider using beam methods (to gracefully work around partially completed sequences that have a negative bias) and applying the bias to their prefixes (to ensure the bias is applied earlier). <Tip> In order to get the token ids of the sequences that you want to bias, make sure to set `add_prefix_space=True` when initializing the tokenizer, and use `tokenizer(bad_words, add_special_tokens=False).input_ids`. The `add_prefix_space` argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours come from `pre tokenizers`. Read more [here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers). </Tip> Args: sequence_bias (`List[List[Union[List[int], float]]]`): List of lists that maps a sequence of tokens to its bias term (e.g. `[[[10, 45], -2.0], [[64], -7.5]]`). Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. If a sequence has a length of 1, its bias will always be applied. Otherwise, the bias will only be applied if the sequence in question is about to be completed (in the token selection step after this processor is applied). Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt") >>> summary_ids = model.generate(inputs["input_ids"], max_new_tokens=4) >>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0]) The full name of Donald is Donald J. Trump Jr >>> # Now let's control generation through a bias. Please note that the tokenizer is initialized differently! >>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("openai-community/gpt2", add_prefix_space=True) >>> def get_tokens(word): ... return tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0] >>> # If we add a negative bias without beam search, it may become "stuck" in a prefix without good continuations >>> sequence_bias = [get_tokens("Trump"), -10.0] >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, sequence_bias=sequence_bias) >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0]) The full name of Donald is Donald J. Donald, >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias) >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0]) The full name of Donald is Donald Rumsfeld, >>> # We can also add a positive bias to nudge the model towards specific tokens or continuations >>> sequence_bias = [get_tokens("Donald Duck"), 10.0] >>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias) >>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0]) The full name of Donald is Donald Duck. 
``` - __call__ [`SuppressTokensAtBeginLogitsProcessor`] suppresses a list of tokens as soon as the `generate` function starts generating using `begin_index` tokens. This should ensure that the tokens defined by `begin_suppress_tokens` are not generated at the beginning. Originally created for [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper). Examples: ```python >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> # Whisper has `begin_suppress_tokens` set by default (= `[220, 50256]`). 50256 is the EOS token, so this means >>> # it can't generate an EOS token in the first iteration, but it can in the others. >>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True) >>> print(outputs.scores[0][0, 50256]) tensor(-inf) >>> print(outputs.scores[-1][0, 50256]) # in other places we can see some probability mass for EOS tensor(29.9010) >>> # If we disable `begin_suppress_tokens`, we can generate EOS in the first iteration. >>> outputs = model.generate( ... **inputs, return_dict_in_generate=True, output_scores=True, begin_suppress_tokens=None ... ) >>> print(outputs.scores[0][0, 50256]) tensor(11.2027) ``` - __call__ This processor can be used to suppress a list of tokens. The processor will set their log probs to `-inf` so that they are not generated. Originally created for [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper). Examples: ```python >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> # Whisper has a long list of suppressed tokens. For instance, in this case, the token 1 is suppressed by default. >>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True) >>> print(outputs.scores[1][0, 1]) # 1 (and not 0) is the first freely generated token tensor(-inf) >>> # If we disable `suppress_tokens`, we can generate it. >>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, suppress_tokens=None) >>> print(outputs.scores[1][0, 1]) tensor(6.0678) ``` - __call__ Logits processor that implements watermarking techniques for text generation models. This class facilitates the application of SynthID text watermarking, a method for embedding imperceptible signals into generated text to aid in detecting synthetic content. It operates by subtly manipulating the probabilities of token selection during text generation in a manner that can be reliably recovered later for verification. Key Features: * **State Management:** Maintains internal state to track token sequences and generate watermarking keys dynamically. * **Key Generation:** Computes hashes based on token sequences and watermarking parameters to create unique keys for each position.
* **G-Value Sampling:** Employs a pre-computed sampling table to sample watermarking values (g-values) based on the generated keys. * **Score Adjustment:** Applies calculated g-values to modify token probabilities during generation, embedding the watermark. * **Context Repetition Handling:** Incorporates logic to avoid watermarking tokens in repeated contexts, preserving naturalness. * **EOS Token Masking:** Supports masking end-of-sentence tokens to prevent their inclusion in watermarking calculations. * **Utility Functions:** Provides functions to compute g-values directly, check for context repetition, create EOS token masks, and estimate expected mean g-values. Refer to [the paper](https://www.nature.com/articles/s41586-024-08025-4) for more details. Args: ngram_len (`int`): Ngram length. keys (`List[int]`): A sequence of watermarking keys, one for each depth. sampling_table_size (`int`): Size of the sampling table. sampling_table_seed (`int`): Random seed to generate the sampling table. context_history_size (`int`): Size of the tensor to keep track of seen contexts. device (`torch.device`): Device to use. skip_first_ngram_calls (`bool`, *optional*, defaults to `False`): Whether to skip first ngram calls. debug_mode (`bool`, *optional*, defaults to `False`): If `True`, the logits are replaced with a uniform distribution before the watermarking modification is applied, which is useful for testing the implementation. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig >>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b', padding_side="left") >>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b') >>> # SynthID Text configuration >>> watermarking_config = SynthIDTextWatermarkingConfig( ... keys=[654, 400, 836, 123, 340, 443, 597, 160, 57], ... ngram_len=5, ... ) >>> # Generation with watermarking >>> tokenized_prompts = tokenizer(["Once upon a time, "], return_tensors="pt", padding=True) >>> output_sequences = model.generate( ... **tokenized_prompts, watermarking_config=watermarking_config, do_sample=True, max_new_tokens=10 ... ) >>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens=True) ``` - __call__ [`LogitsProcessor`] for temperature (exponential scaling output probability distribution), which effectively means that it can control the randomness of the predicted tokens. Often used together with [`TopPLogitsWarper`] and [`TopKLogitsWarper`]. <Tip> Make sure that `do_sample=True` is included in the `generate` arguments, otherwise the temperature value won't have any effect. </Tip> Args: temperature (`float`): Strictly positive float value used to modulate the logits distribution. A value smaller than `1` decreases randomness (and vice versa), with `0` being equivalent to shifting all probability mass to the most likely token. Examples: ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # for reproducibility >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> model.config.pad_token_id = model.config.eos_token_id >>> inputs = tokenizer(["Hugging Face Company is"], return_tensors="pt") >>> # With temperature=1.0, the default, we consistently get random outputs due to random sampling.
>>> generate_kwargs = {"max_new_tokens": 10, "do_sample": True, "temperature": 1.0, "num_return_sequences": 2} >>> outputs = model.generate(**inputs, **generate_kwargs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Hugging Face Company is one of these companies that is going to take a', "Hugging Face Company is a brand created by Brian A. O'Neil"] >>> # However, with temperature close to 0, it approximates greedy decoding strategies (invariant) >>> generate_kwargs["temperature"] = 0.0001 >>> outputs = model.generate(**inputs, **generate_kwargs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Hugging Face Company is a company that has been around for over 20 years', 'Hugging Face Company is a company that has been around for over 20 years'] ``` - __call__ [`LogitsProcessor`] that performs top-k, i.e. restricting to the k highest probability elements. Often used together with [`TemperatureLogitsWarper`] and [`TopPLogitsWarper`]. Args: top_k (`int`): The number of highest probability vocabulary tokens to keep for top-k-filtering. filter_value (`float`, *optional*, defaults to -inf): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(1) >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: A, B, C, D", return_tensors="pt") >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: A, B, C, D, E — S — O, P — R >>> # With `top_k` sampling, the output gets restricted to the k most likely tokens. >>> # Pro tip: In practice, LLMs use `top_k` in the 5-50 range. >>> outputs = model.generate(**inputs, do_sample=True, top_k=2) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: A, B, C, D, E, F, G, H, I ``` - __call__ [`LogitsProcessor`] that performs top-p, i.e. restricting to the smallest set of most probable tokens whose probabilities sum to `top_p` or higher. Often used together with [`TemperatureLogitsWarper`] and [`TopKLogitsWarper`]. Args: top_p (`float`): If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. filter_value (`float`, *optional*, defaults to -inf): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(1) >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt") >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With `top_p` sampling, the output gets restricted to high-probability tokens. >>> # Pro tip: In practice, LLMs use `top_p` in the 0.9-0.95 range.
>>> outputs = model.generate(**inputs, do_sample=True, top_p=0.1) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9 ``` - __call__ [`LogitsProcessor`] that performs typical decoding. Inspired by how humans use language, it prioritizes tokens whose log probability is close to the entropy of the token probability distribution. This means that the most likely tokens may be discarded in the process. See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information. Args: mass (`float`, *optional*, defaults to 0.9): Value of typical_p between 0 and 1 inclusive, defaults to 0.9. filter_value (`float`, *optional*, defaults to -inf): All filtered values will be set to this float value. min_tokens_to_keep (`int`, *optional*, defaults to 1): Minimum number of tokens that cannot be filtered. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m") >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m") >>> inputs = tokenizer("1, 2, 3", return_tensors="pt") >>> # We can see that greedy decoding produces a sequence of numbers >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, >>> # For this particular seed, we can see that sampling produces nearly the same low-information (= low entropy) >>> # sequence >>> set_seed(18) >>> outputs = model.generate(**inputs, do_sample=True) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10 >>> # With `typical_p` set, the most obvious sequence is no longer produced, which may be good for your problem >>> set_seed(18) >>> outputs = model.generate( ... **inputs, do_sample=True, typical_p=0.1, return_dict_in_generate=True, output_scores=True ... ) >>> print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)[0]) 1, 2, 3 and 5 >>> # We can see that the token corresponding to "4" (token 934) in the second position, the most likely token >>> # as seen with greedy decoding, was entirely blocked out >>> print(outputs.scores[1][0, 934]) tensor(-inf) ``` - __call__ Logits processor for Classifier-Free Guidance (CFG). The processor computes a weighted average across scores from prompt conditional and prompt unconditional (or negative) logits, parameterized by the `guidance_scale`. The unconditional scores are computed internally by prompting `model` with the `unconditional_ids` branch. See [the paper](https://arxiv.org/abs/2306.17806) for more information. Args: guidance_scale (`float`): The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale != 1`. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. A value smaller than 1 has the opposite effect, while making the negative prompt provided with negative_prompt_ids (if any) act as a positive prompt. model (`PreTrainedModel`): The model computing the unconditional scores. Usually the same as the one computing the conditional scores. Both models must use the same tokenizer. unconditional_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of input sequence tokens in the vocabulary for the unconditional branch. If unset, will default to the last token of the prompt.
unconditional_attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Attention mask for unconditional_ids. use_cache (`bool`, *optional*, defaults to `True`): Whether to cache key/values during the negative prompt forward pass. Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(["Today, a dragon flew over Paris, France,"], return_tensors="pt") >>> out = model.generate(inputs["input_ids"], guidance_scale=1.5) >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] 'Today, a dragon flew over Paris, France, killing at least 50 people and injuring more than 100' >>> # with a negative prompt >>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt") >>> out = model.generate(inputs["input_ids"], guidance_scale=2, negative_prompt_ids=neg_inputs["input_ids"]) >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] 'Today, a dragon flew over Paris, France, killing at least 130 people. French media reported that' >>> # with a positive prompt >>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt") >>> out = model.generate(inputs["input_ids"], guidance_scale=0, negative_prompt_ids=neg_inputs["input_ids"]) >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] "Today, a dragon flew over Paris, France, and I'm very happy to be here. I" ``` - __call__ [`LogitsProcessor`] that modifies the logits for the generation of timestamps in the transcription. When the input tokens are at a specific threshold, the processor sets the scores to negative infinity. The processor makes sure that timestamp tokens appear in pairs, by masking out the logits that would break this pairing pattern. This is done to maintain the consistency and structure of generated timestamps. It also ensures that when the predicted probability of sampling any of the timestamp token is greater than any individual non-timestamp token, those non-timestamp logits are set to negative infinity. This is done to ensure the generation of timestamps over other potential tokens. See [the paper](https://arxiv.org/abs/2212.04356) for more information. Args: generate_config (`GenerateConfig`): The generate config used to generate the output. The following parameters are required: eos_token_id (`int`, *optional*, defaults to 50257): The id of the *end-of-sequence* token. no_timestamps_token_id (`int`, *optional*, defaults to 50363): The id of the `"<|notimestamps|>"` token. max_initial_timestamp_index (`int`, *optional*, defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future. begin_index (`Optional`, *optional*): Token index of the first token that is generated by the model. _detect_timestamp_from_logprob (`bool`, *optional*): Whether timestamps can be predicted from logprobs over all timestamps. 
Examples: ``` python >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration, GenerationConfig >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[3]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> # Displaying timestamps >>> generated_ids = model.generate(inputs=input_features, return_timestamps=True) >>> transcription = processor.batch_decode(generated_ids, decode_with_timestamps=True)[0] >>> print("Transcription:", transcription) Transcription: <|startoftranscript|><|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and can<|6.44|><|6.44|> discover in it but little of rocky Ithaca.<|9.44|><|endoftext|> >>> # No timestamps & change EOS: >>> # This allows the user to select a specific token to terminate the sequence on, in this case it's the word "can" (460) >>> model.generation_config.eos_token_id = 460 >>> generated_ids = model.generate(inputs=input_features, return_timestamps=False) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print("Transcription:", transcription) Transcription: He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can ``` - __call__ Logits processor for watermarking generated text. The processor modifies model output scores by adding a small bias to a randomized set of "green" tokens before generating the next token. The "green" token selection process depends on the `seeding_scheme` used. The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main). The text generated by this `LogitsProcessor` can be detected using `WatermarkDetector`. See [`~WatermarkDetector.__call__`] for details, and see [the paper](https://arxiv.org/abs/2306.04634) for more information. Args: vocab_size (`int`): The model tokenizer's vocab_size. Used to calculate the "green" tokens ratio. device (`str`): The device where the model is allocated. greenlist_ratio (`float`, *optional*, defaults to 0.25): The ratio of "green" tokens used to the vocabulary size. Defaults to 0.25. bias (`float`, *optional*, defaults to 2.0): The bias added to the selected "green" tokens' logits. Consider lowering the `bias` if the text generation quality degrades. Recommended values are in the range of [0.5, 2.0]. Defaults to 2.0. hashing_key (`int`, *optional*, defaults to 15485863): Key used for hashing. If you deploy this watermark, we advise using another private key. Defaults to 15485863 (the millionth prime). seeding_scheme (`str`, *optional*, defaults to `"lefthash"`): The seeding scheme used for selecting "green" tokens. Accepts values: - "lefthash" (default): "green" token selection depends on the last token (Algorithm 2 from the paper) - "selfhash": "green" token selection depends on the current token itself (Algorithm 3 from the paper) The downside of this scheme is that it considers all possible next tokens and can be slower than "lefthash". context_width (`int`, *optional*, defaults to 1): The number of previous tokens (the context length) to use when setting the seed. A higher context width makes watermarking more robust. 
Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkingConfig >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(["Alice and Bob are"], return_tensors="pt") >>> # normal generation >>> out = model.generate(inputs["input_ids"], max_length=20, do_sample=False) >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] 'Alice and Bob are both in the same room.\n\n"I\'m not sure if you\'re' >>> # watermarked generation >>> watermarking_config = WatermarkingConfig(bias=2.5, context_width=2, seeding_scheme="selfhash") >>> out = model.generate(inputs["input_ids"], watermarking_config=watermarking_config, max_length=20, do_sample=False) >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] 'Alice and Bob are both still alive and well and the story is pretty much a one-hour adventure' >>> # to detect watermarked text use the WatermarkDetector class >>> from transformers import WatermarkDetector >>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config= watermarking_config) >>> detection_preds = detector(out) >>> detection_preds array([ True]) ``` - __call__
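Each of the PyTorch processors above is normally built for you from `GenerationConfig` arguments, but `generate` also accepts a manually constructed list through its `logits_processor` argument. Below is a minimal sketch of that route; the model choice and the parameter values (`min_length=15`, `temperature=0.7`) are illustrative, not recommendations.

```python
>>> from transformers import (
...     AutoModelForCausalLM,
...     AutoTokenizer,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     TemperatureLogitsWarper,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["Today, a dragon flew over Paris, France,"], return_tensors="pt")

>>> # Build the processor list by hand instead of letting `generate` create it from the generation config
>>> processors = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(min_length=15, eos_token_id=model.generation_config.eos_token_id),
...         TemperatureLogitsWarper(temperature=0.7),
...     ]
... )
>>> out = model.generate(**inputs, do_sample=True, logits_processor=processors)
>>> text = tokenizer.batch_decode(out, skip_special_tokens=True)[0]
```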
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#pytorch
#pytorch
.md
427_7
TFForcedBOSTokenLogitsProcessor - __call__ TFForcedEOSTokenLogitsProcessor - __call__ TFForceTokensLogitsProcessor - __call__ TFLogitsProcessor - __call__ TFLogitsProcessorList - __call__ TFLogitsWarper - __call__ TFMinLengthLogitsProcessor - __call__ TFNoBadWordsLogitsProcessor - __call__ TFNoRepeatNGramLogitsProcessor - __call__ TFRepetitionPenaltyLogitsProcessor - __call__ TFSuppressTokensAtBeginLogitsProcessor - __call__ TFSuppressTokensLogitsProcessor - __call__ TFTemperatureLogitsWarper - __call__ TFTopKLogitsWarper - __call__ TFTopPLogitsWarper - __call__
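Unlike their PyTorch counterparts, the TensorFlow processors are called with an explicit `cur_len` argument by the TF generation loop. The snippet below is a rough sketch of applying two of them directly to dummy scores; it assumes the `(input_ids, scores, cur_len)` call signature and uses arbitrary toy values.

```python
>>> import tensorflow as tf
>>> from transformers import TFLogitsProcessorList, TFMinLengthLogitsProcessor, TFTemperatureLogitsWarper

>>> # Toy batch of one partially generated sequence over a 10-token vocabulary (values are arbitrary)
>>> input_ids = tf.constant([[0, 5, 3]])
>>> scores = tf.random.normal((1, 10))

>>> processors = TFLogitsProcessorList(
...     [
...         TFMinLengthLogitsProcessor(min_length=5, eos_token_id=2),
...         TFTemperatureLogitsWarper(temperature=0.7),
...     ]
... )
>>> # `cur_len` is the number of tokens generated so far (an assumption about the expected call signature)
>>> processed_scores = processors(input_ids, scores, cur_len=input_ids.shape[1])
```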
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#tensorflow
#tensorflow
.md
427_8
FlaxForcedBOSTokenLogitsProcessor - __call__ FlaxForcedEOSTokenLogitsProcessor - __call__ FlaxForceTokensLogitsProcessor - __call__ FlaxLogitsProcessor - __call__ FlaxLogitsProcessorList - __call__ FlaxLogitsWarper - __call__ FlaxMinLengthLogitsProcessor - __call__ FlaxSuppressTokensAtBeginLogitsProcessor - __call__ FlaxSuppressTokensLogitsProcessor - __call__ FlaxTemperatureLogitsWarper - __call__ FlaxTopKLogitsWarper - __call__ FlaxTopPLogitsWarper - __call__ FlaxWhisperTimeStampLogitsProcessor - __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#flax
#flax
.md
427_9
A [`StoppingCriteria`] can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations. Abstract base class for all stopping criteria that can be applied during generation. If your stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`. - __call__ Abstract base class for all stopping criteria that can be applied during generation. If your stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`. List - __call__ This class can be used to stop generation whenever the full generated number of tokens exceeds `max_length`. Keep in mind for decoder-only type of transformers, this will include the initial prompted tokens. Args: max_length (`int`): The maximum length that the output sequence can have in number of tokens. max_position_embeddings (`int`, *optional*): The maximum model length, as defined by the model's `config.max_position_embeddings` attribute. - __call__ This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the time will start being counted when you initialize this function. You can override this by passing an `initial_time`. Args: max_time (`float`): The maximum allowed time in seconds for the generation. initial_time (`float`, *optional*, defaults to `time.time()`): The start of the generation allowed time. - __call__ This class can be used to stop generation whenever specific string sequences are generated. It preprocesses the strings together with the tokenizer vocab to find positions where tokens can validly complete the stop strings. Generation is stopped as soon as a token is generated that completes any of the stop strings. We want to catch any instance in which the stop string would be present in the decoded output, which means we must also catch cases with "overhangs" off one or both ends. To make this more concrete, for the stop string "stop", any of the following token sequences would trigger the match: - ["st", "op"] - ["stop"] - ["st", "opera"] - ["sto", "pper"] - ["las", "topper"] - ["s", "to", "pped"] Note that a match will only be triggered if the stop string is at the end of the generated sequence. In other words, these sequences will not trigger a match: - ["stop", "at"] - ["st", "op", "at"] - ["st", "opera", "tion"] The reason these are not a match is that the stop string does not overlap with the final token. If you can remove one or more tokens from the end of the sequence without destroying the stop string, then this criterion will not match that stop string. This is by design; because this check is run after each token is generated, we can't miss a valid stop string if one is generated, but we don't want to halt generation just because the stop string exists somewhere in the past input_ids. How is the match actually performed, though? We do it in quite a confusing way, because we want the entire match process to be compilable with Torch or XLA, which means we cannot use standard string methods. However, it is possible, with some work, to do string matching with pure tensor operations. We'll begin by describing the algorithm we use with standard string operations, and then at the end we'll explain how this is converted to pure tensor operations. 
The key to the algorithm is an observation: Because the stop string must overlap with the end of the token sequence, we can start at the end of the sequence and work backwards. Specifically, we check that there is an overlap between the start of the final token and the end of the stop_string, or to put it another way, stop_string[-i:] == token[:i] for some i > 0. If you look at the positive examples above, you'll see the last token in all of them fulfills this property: - ["st", "op"] (overlap is "op", overlap length == 2) - ["stop"] (overlap is "stop", overlap length == 4) - ["st", "opera"] (overlap is "op", overlap length == 2) - ["sto", "pper"] (overlap is "p", overlap length == 1) - ["las", "topper"] (overlap is "top", overlap length == 3) - ["s", "to", "pped"] (overlap is "p", overlap length == 1) It's impossible to construct a matching sequence that does not have this property (feel free to verify this yourself). However, although this overlap between the start of the final token and the end of the stop string is necessary for a match, it is not sufficient. We also need to check that the rest of the token sequence is consistent with the stop string. How do we do that? Let's use ["s", "to", "pped"] as an example. We know that the final token, "pped", has an overlap of 1 with the stop string, "stop". We then go back to the previous token, "to". Since we have already matched 1 character from the stop string, the remainder to check is "sto". We check that the next token "to" matches the end of the remainder, which it does. We have now matched 3 characters from the stop string, and the remainder to match is "s". We go back to the previous token again, which is also "s". This is a match, and so we have matched the entire stop string. How does it work when the tokens run off the start of the stop string, though? Let's consider the example of ["las", "topper"]. The final token, "topper", has an overlap of 3 with the stop string, "stop". Therefore, the remaining stop string to match is "s". We go back to the previous token, "las". Because the remainder to match is just "s", with length 1, we consider only the final 1 character from the token, which is "s". This matches the stop string, and so the entire string is matched. How do we compute these matches with tensor operations, though? Simply: we efficiently precompute the necessary information for all tokens! For every token, we compute: - Its overlap with the end of the stop string, if any - The positions inside the stop string where the token matches, including matches that run off the start. - The total length of the token For example, for the token "pped", we would compute an end overlap of 1, no internal matching positions, and a length of 4. For the token "to", we would compute no end overlap, a single internal matching position of 1 (counting from the end), and a length of 2. For the token "s", we would compute no end overlap, a single internal matching position of 3 (again counting from the end) and a length of 1. As long as we have this information, we can execute the algorithm above without any string comparison operations. 
We simply perform the following steps: - Check if the final token has an end-overlap with the start string - Continue backwards, keeping track of how much of the stop string we've matched so far - At each point, check if the next token has the current position as one of its valid positions - Continue until either a match fails, or we completely match the whole stop string Again, consider ["s", "to", "pped"] as an example. "pped" has an end overlap of 1, so we can begin a match. We have matched 1 character so far, so we check that the next token "to", has 1 as a valid position (again, counting from the end). It does, so we add the length of "to" to our position tracker. We have now matched 3 characters, so we check that the next token "s" has 3 as a valid position. It does, so we add its length to the position tracker. The position tracker is now 4, which is the length of the stop string. We have matched the entire stop string. In the second case, ["las", "topper"], "topper" has an end overlap of 3, so we can begin a match. We have matched 3 characters so far, so we check that the next token "las" has 3 as a valid position. It does, because we allow tokens to match positions that run off the start of the stop string. We add its length to the position tracker. The position tracker is now 6, which is greater than the length of the stop string! Don't panic, though - this also counts as a match of the stop string. We have matched the entire stop string. Args: tokenizer (`PreTrainedTokenizer`): The model's associated tokenizer (necessary to extract vocab and tokenize the termination sequences) stop_strings (`Union[str, List[str]]`): A list of strings that should end generation. If a string is passed, it will be treated like a list with a single element. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2") >>> model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2") >>> inputs = tokenizer("The biggest states in the USA by land area:", return_tensors="pt") >>> gen_out = model.generate(**inputs) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) The biggest states in the USA by land area: - Alaska - Texas - California >>> # Passing one or more stop strings will halt generation after those strings are emitted >>> # Note that generating with stop strings requires you to pass the tokenizer too >>> gen_out = model.generate(**inputs, stop_strings=["Texas"], tokenizer=tokenizer) >>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0]) The biggest states in the USA by land area: - Alaska - Texas ``` - __call__ This class can be used to stop generation whenever the "end-of-sequence" token is generated. By default, it uses the `model.generation_config.eos_token_id`. Args: eos_token_id (`Union[int, List[int], torch.Tensor]`): The id(s) of the *end-of-sequence* token. - __call__
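As a plain-Python illustration of the backwards matching walk described above (the real class precomputes the same per-token information as tensors so the check stays compilable), here is a simplified sketch. It mirrors only the logic spelled out in this docstring and ignores edge cases the tensor implementation handles; the token/stop-string examples are the ones listed above.

```python
def ends_with_stop_string(tokens: list, stop_string: str) -> bool:
    """Walk backwards from the final token, tracking how many characters of the stop string are matched."""
    if not tokens:
        return False

    # End-overlap of the final token with the stop string: stop_string[-i:] == token[:i] for some i > 0
    final = tokens[-1]
    overlap = 0
    for i in range(min(len(final), len(stop_string)), 0, -1):
        if stop_string[-i:] == final[:i]:
            overlap = i
            break
    if overlap == 0:
        return False

    matched = overlap
    for token in reversed(tokens[:-1]):
        if matched >= len(stop_string):
            break
        remainder = stop_string[: len(stop_string) - matched]
        # The token must match the end of the remainder; it may also run off the start of the stop string
        tail = remainder[-len(token):] if len(token) <= len(remainder) else remainder
        if token[-len(tail):] != tail:
            return False
        matched += len(token)
    return matched >= len(stop_string)


# Positive and negative examples from the explanation above
assert ends_with_stop_string(["s", "to", "pped"], "stop")
assert ends_with_stop_string(["las", "topper"], "stop")
assert not ends_with_stop_string(["st", "op", "at"], "stop")
```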
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#stoppingcriteria
#stoppingcriteria
.md
427_10
A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations. Abstract base class for all constraints that can be applied during generation. It must define how the constraint can be satisfied. All classes that inherit Constraint must follow the requirement that ```py completed = False while not completed: _, completed = constraint.update(constraint.advance()) ``` will always terminate (halt). [`Constraint`] enforcing that an ordered sequence of tokens is included in the output. Args: token_ids (`List[int]`): The ids of the tokens that must be generated by the output. A special [`Constraint`] that is fulfilled by fulfilling just one of several constraints. Args: nested_token_ids (`List[List[int]]`): A list of words, where each word is a list of ids. This constraint is fulfilled by generating just one from the list of words. Abstract base class for all constraints that can be applied during generation. It must define how the constraint can be satisfied. All classes that inherit Constraint must follow the requirement that ```py completed = False while not completed: _, completed = constraint.update(constraint.advance()) ``` will always terminate (halt). ListState
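These constraint classes are rarely instantiated by hand; in practice they are built for you when `force_words_ids` is passed to `generate`, or you can pass `Constraint` objects directly through the `constraints` argument, which triggers constrained beam search. A rough sketch (the model and the forced phrase are arbitrary choices):

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, PhrasalConstraint

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer("translate English to German: How old are you?", return_tensors="pt").input_ids

>>> # Force the word "Sie" to appear somewhere in the output; constrained beam search needs num_beams > 1
>>> constraint = PhrasalConstraint(tokenizer("Sie", add_special_tokens=False).input_ids)
>>> outputs = model.generate(input_ids, constraints=[constraint], num_beams=5, max_new_tokens=20)
>>> translation = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```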
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#constraints
#constraints
.md
427_11
Abstract base class for all beam scorers that are used for [`~PreTrainedModel.beam_search`] and [`~PreTrainedModel.beam_sample`]. - process - finalize [`BeamScorer`] implementing standard beam search decoding. Adapted in part from [Facebook's XLM beam search code](https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529). Reference for the diverse beam search algorithm and implementation [Ashwin Kalyan's DBS implementation](https://github.com/ashwinkalyan/dbs/blob/master/dbs/beam_utils.lua) Args: batch_size (`int`): Batch Size of `input_ids` for which standard beam search decoding is run in parallel. num_beams (`int`): Number of beams for beam search. device (`torch.device`): Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be allocated. length_penalty (`float`, *optional*, defaults to 1.0): Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while `length_penalty` < 0.0 encourages shorter sequences. do_early_stopping (`bool` or `str`, *optional*, defaults to `False`): Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where an heuristic is applied and the generation stops when is it very unlikely to find better candidates; `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). num_beam_hyps_to_keep (`int`, *optional*, defaults to 1): The number of beam hypotheses that shall be returned upon calling [`~transformers.BeamSearchScorer.finalize`]. num_beam_groups (`int`, *optional*, defaults to 1): Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. max_length (`int`, *optional*): The maximum length of the sequence to be generated. - process - finalize [`BeamScorer`] implementing constrained beam search decoding. Args: batch_size (`int`): Batch Size of `input_ids` for which standard beam search decoding is run in parallel. num_beams (`int`): Number of beams for beam search. constraints (`List[Constraint]`): A list of positive constraints represented as `Constraint` objects that must be fulfilled in the generation output. For more information, the documentation of [`Constraint`] should be read. device (`torch.device`): Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be allocated. length_penalty (`float`, *optional*, defaults to 1.0): Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while `length_penalty` < 0.0 encourages shorter sequences. do_early_stopping (`bool` or `str`, *optional*, defaults to `False`): Controls the stopping condition for beam-based methods, like beam-search. 
It accepts the following values: `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). num_beam_hyps_to_keep (`int`, *optional*, defaults to 1): The number of beam hypotheses that shall be returned upon calling [`~transformers.BeamSearchScorer.finalize`]. num_beam_groups (`int`, *optional*, defaults to 1): Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. max_length (`int`, *optional*): The maximum length of the sequence to be generated. - process - finalize
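These scorers are normally created internally by `generate` from its beam-related arguments, so a typical way to exercise `num_beam_groups` is diverse (group) beam search through `generate` itself. A short sketch with illustrative values:

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer(
...     "summarize: The tower is 324 metres tall, about the same height as an 81-storey building.",
...     return_tensors="pt",
... ).input_ids

>>> # 6 beams split into 3 groups; the diversity penalty discourages the groups from repeating each other
>>> outputs = model.generate(
...     input_ids,
...     num_beams=6,
...     num_beam_groups=3,
...     diversity_penalty=1.0,
...     num_return_sequences=3,
...     max_new_tokens=30,
... )
>>> summaries = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```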
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#beamsearch
#beamsearch
.md
427_12
Simple text streamer that prints the token(s) to stdout as soon as entire words are formed. <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> Parameters: tokenizer (`AutoTokenizer`): The tokenizer used to decode the tokens. skip_prompt (`bool`, *optional*, defaults to `False`): Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. decode_kwargs (`dict`, *optional*): Additional keyword arguments to pass to the tokenizer's `decode` method. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` Streamer that stores print-ready text in a queue, to be used by a downstream application as an iterator. This is useful for applications that benefit from accessing the generated text in a non-blocking way (e.g. in an interactive Gradio demo). <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> Parameters: tokenizer (`AutoTokenizer`): The tokenizer used to decode the tokens. skip_prompt (`bool`, *optional*, defaults to `False`): Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. timeout (`float`, *optional*): The timeout for the text queue. If `None`, the queue will block indefinitely. Useful to handle exceptions in `.generate()`, when it is called in a separate thread. decode_kwargs (`dict`, *optional*): Additional keyword arguments to pass to the tokenizer's `decode` method. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer >>> from threading import Thread >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextIteratorStreamer(tok) >>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20) >>> thread = Thread(target=model.generate, kwargs=generation_kwargs) >>> thread.start() >>> generated_text = "" >>> for new_text in streamer: ... generated_text += new_text >>> generated_text 'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,' ``` Streamer that stores print-ready text in a queue, to be used by a downstream application as an async iterator. This is useful for applications that benefit from accessing the generated text asynchronously (e.g. in an interactive Gradio demo). <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> Parameters: tokenizer (`AutoTokenizer`): The tokenizer used to decode the tokens. skip_prompt (`bool`, *optional*, defaults to `False`): Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots. timeout (`float`, *optional*): The timeout for the text queue. 
If `None`, the queue will block indefinitely. Useful to handle exceptions in `.generate()`, when it is called in a separate thread. decode_kwargs (`dict`, *optional*): Additional keyword arguments to pass to the tokenizer's `decode` method. Raises: TimeoutError: If token generation time exceeds timeout value. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, AsyncTextIteratorStreamer >>> from threading import Thread >>> import asyncio >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> async def main(): ... # Important: AsyncTextIteratorStreamer must be initialized inside a coroutine! ... streamer = AsyncTextIteratorStreamer(tok) ... generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20) ... thread = Thread(target=model.generate, kwargs=generation_kwargs) ... thread.start() ... generated_text = "" ... async for new_text in streamer: ... generated_text += new_text >>> print(generated_text) >>> asyncio.run(main()) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#streamers
#streamers
.md
427_13
Base, abstract class for all caches. The actual data structure is specific to each subclass. - update Base, abstract class for all caches. The actual data structure is specific to each subclass. Config - update Configuration class for quantized cache settings. Attributes: backend (`str`, *optional*, defaults to `"quanto"`): Backend to use when performing quantization. Can be one of [`quanto`, `HQQ`] nbits (`Optional[int]`, *optional*, defaults to 4): Number of bits, can be 2 or 4 for the `quanto` backend and one of [1, 2, 3, 4, 8] for the `HQQ` backend. Defaults to 4. axis_key (`int`, *optional*, defaults to 0): Axis over which to perform grouping for the key tensors. Can be [0, -1] for `quanto` backend and [0, 1] for `HQQ` backend. axis_value (`int`, *optional*, defaults to 0): Axis over which to perform grouping for the value tensors. Can be [0, -1] for `quanto` backend and [0, 1] for `HQQ` backend. q_group_size (`Optional[int]`, *optional*, defaults to 64): Size of the quantization group, should be a divisor of the model's hidden dimension. Defaults to 64. residual_length (`Optional[int]`, *optional*, defaults to 128): Length of the residual cache which will always be stored in original precision. Defaults to 128. compute_dtype (`torch.dtype`, *optional*, defaults to `torch.float16`): The default dtype used for computations in the model. Keys and Values will be cast to this dtype after dequantization. device (`str`, *optional*, defaults to `"cpu"`): Device on which to perform computations, should be the same as the model's device. - validate A cache that grows dynamically as more tokens are generated. This is the default for generative models. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is `[batch_size, num_heads, seq_len, head_dim]`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache >>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = DynamicCache() >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation DynamicCache() ``` - update - get_seq_length - reorder_cache - to_legacy_cache - from_legacy_cache A quantized cache similar to what is described in the [KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache paper](https://arxiv.org/abs/2402.02750). It allows the model to generate longer sequences without allocating too much memory for the Key and Value cache by applying quantization. The cache has two types of storage, one for original precision and one for the quantized cache. A `residual length` is set as a maximum capacity for the original precision cache. When the length goes beyond maximum capacity, the original precision cache is discarded and moved into the quantized cache. The quantization is done per-channel with a set `q_group_size` for both Keys and Values, in contrast to what was described in the paper. It stores Keys and Values as a list of quantized tensors (tuples in case we need to store metadata), one for each layer. Additionally, it stores the Key and Value states in original precision as a list of tensors, one for each layer. 
The size of each tensor is `[batch_size, num_heads, seq_len - residual_length, head_dim]` - update - get_seq_length Quantized Cache class that uses `quanto` as a backend to perform quantization. Current implementation supports `int2` and `int4` dtypes only. Parameters: cache_config (`QuantizedCacheConfig`): A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Example: ```python >>> # Run pip install quanto first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, QuantoQuantizedCache, QuantizedCacheConfig >>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits=4) >>> past_key_values = QuantoQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation QuantoQuantizedCache() ``` Quantized Cache class that uses `HQQ` as a backend to perform quantization. Current implementation supports `int2`, `int4`, `int8` dtypes. Parameters: cache_config (`QuantizedCacheConfig`): A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Example: ```python >>> # Run pip install hqq first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HQQQuantizedCache, QuantizedCacheConfig >>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits=4, axis_key=1, axis_value=1) >>> past_key_values = HQQQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation HQQQuantizedCache() ``` A cache that as described in the [Attention Sinks paper](https://arxiv.org/abs/2309.17453). It allows the model to generate beyond the length of its context window, without losing fluency in the conversation. As it discards past tokens, the model will lose the ability to generate tokens that depend on the context that was discarded. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is `[batch_size, num_heads, seq_len, head_dim]`. Parameters: window_length (`int`): The length of the context window. num_sink_tokens (`int`): The number of sink tokens. See the original paper for more information. 
Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache >>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct") >>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = SinkCache(window_length=256, num_sink_tokens=4) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation SinkCache() ``` - update - get_seq_length - reorder_cache A drop-in replacement for DynamicCache that conserves GPU memory at the expense of more CPU memory. Useful for generating from models with very long context. In addition to the default CUDA stream, where all forward() computations happen, this class uses another stream, the prefetch stream, which it creates itself. Since scheduling of operations on separate streams happens independently, this class uses the prefetch stream to asynchronously prefetch the KV cache of layer k+1 when layer k is executing. The movement of the layer k-1 cache to the CPU is handled by the default stream as a simple way to ensure the eviction is scheduled after all computations on that cache are finished. - update - prefetch_layer - evict_previous_layer Static Cache class to be used with `torch.compile(model)` and `torch.export()`. Parameters: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. If you are manually setting the batch size, make sure to take into account the number of beams if you are running beam search max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map(`Dict[int, Union[str, torch.device, int]]]`, `optional`): Mapping between the layers and its device. This is required when you are manually initializing the cache and the model is splitted between differents gpus. You can know which layers mapped to which device by checking the associated device_map: `model.hf_device_map`. 
Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache >>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> inputs = tokenizer(text="My name is Llama", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = StaticCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation StaticCache() ``` - update - get_seq_length - reset Static cache class to be used with `torch.compile(model)` that offloads to the CPU or another device. Args: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. max_batch_size (`int`): The maximum batch size with which the model will be used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`Union[str, torch.device]`): The device on which the cache should be initialized. Should be the same as the layer device. dtype (`torch.dtype`, *optional*): The default `dtype` to use when initializing the cache. offload_device (`Union[str, torch.device]`, *optional*, defaults to `cpu`): The device to offload to. Defaults to CPU. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs. You can check which layers are mapped to which device via the associated device_map: `model.hf_device_map`. Attributes: key_cache (`List[torch.Tensor]`): Off-loaded key cache tensors. The first one will be on device, whereas the others are off-loaded. value_cache (`List[torch.Tensor]`): Off-loaded value cache tensors. The first one will be on device, whereas the others are off-loaded. max_batch_size (`int`): The maximum batch size with which this cache can be used. max_cache_len (`int`): The maximum sequence length with which this cache can be used. device (`torch.device`): The device on which the cache is used. offload_device (`torch.device`): The device used to offload to. dtype (`torch.dtype`): The `dtype` used to initialize the cache. 
Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, OffloadedStaticCache >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> inputs = tokenizer(text="My name is GPT2", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = OffloadedStaticCache(config=model.config, max_batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> past_kv_length = outputs.past_key_values # access cache filled with key/values from generation ``` - update - get_seq_length - reset Hybrid Cache class to be used with `torch.compile` for Gemma2 models that alternate between local sliding window attention and global attention in every other layer. Under the hood, Hybrid Cache leverages [`SlidingWindowCache`] for sliding window attention and [`StaticCache`] for global attention. For more information, see the documentation of each subcomponent cache class. Parameters: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`, *optional*, defaults to `"cpu"`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map (`Dict[int, Union[str, torch.device, int]]`, *optional*): Mapping between the layers and their devices. This is required when you are manually initializing the cache and the model is split between different GPUs. You can check which layers are mapped to which device via the associated device_map: `model.hf_device_map`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HybridCache >>> model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b") >>> tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") >>> inputs = tokenizer(text="My name is Gemma", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = HybridCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation HybridCache() ``` - update - get_seq_length - reset Sliding Window Cache class to be used with `torch.compile` for models like Mistral that support sliding window attention. 
Every time when we try to update the cache, we compute the `indices` based on `cache_position >= self.config.sliding_window - 1`, if true(which means the cache can not hold all the old key value states and new states together because of the sliding window constraint), we need to do a cycle shift based on `indices` to replace the oldest states by the new key value states passed in. The `to_shift` is only true once we are above sliding_window. Thus with `sliding_window==64`: indices = (slicing + to_shift[-1].int()-1) % self.config.sliding_window tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 0]) We overwrite the cache using these, then we always write at cache_position (clamped to `sliding_window`) Parameters: config (`PretrainedConfig`): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len (`int`): The maximum sequence length with which the model will be used. device (`torch.device` or `str`): The device on which the cache should be initialized. Should be the same as the layer. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): The default `dtype` to use when initializing the layer. layer_device_map(`Dict[int, Union[str, torch.device, int]]]`, `optional`): Mapping between the layers and its device. This is required when you are manually initializing the cache and the model is splitted between differents gpus. You can know which layers mapped to which device by checking the associated device_map: `model.hf_device_map`. Example: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SlidingWindowCache >>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3") >>> inputs = tokenizer(text="My name is Mistral", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[1] + 10 >>> past_key_values = SlidingWindowCache(config=model.config, batch_size=1, max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation SlidingWindowCache() ``` - update - reset Base, abstract class for all encoder-decoder caches. Can be used to hold combinations of self-attention and cross-attention caches. 
Example: ```python >>> from transformers import AutoProcessor, AutoModelForCausalLM, DynamicCache, EncoderDecoderCache >>> model = AutoModelForCausalLM.from_pretrained("openai/whisper-small") >>> processor = AutoProcessor.from_pretrained("openai/whisper-small") >>> inputs = processor(audio=YOUR-AUDIO, return_tensors="pt") >>> # Prepare cache classes for encoder and decoder and pass it to model's forward >>> self_attention_cache = DynamicCache() >>> cross_attention_cache = DynamicCache() >>> past_key_values = EncoderDecoderCache(self_attention_cache, cross_attention_cache) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values # access cache filled with key/values from generation EncoderDecoderCache() ``` - get_seq_length - to_legacy_cache - from_legacy_cache - reset - reorder_cache Cache for mamba model which does not have attention mechanism and key value states. Arguments: config (`PretrainedConfig): The configuration file defining the shape-related attributes required to initialize the static cache. batch_size (`int`): The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. dtype (`torch.dtype`, *optional*, defaults to `torch.float16`): The default `dtype` to use when initializing the layer. device (`torch.device` or `str`, *optional*): The device on which the cache should be initialized. Should be the same as the layer. Attributes: dtype: (`torch.dtype`): The default `dtype` used to initializing the cache. intermediate_size: (`int`): Model's intermediate_size taken from config. ssm_state_size: (`int`): Model's state_size taken from config. conv_kernel_size: (`int`): Model's convolution kernel size taken from config conv_states: (`torch.Tensor`): A tensor of shape `[layer_idx, batch_size, intermediate_size, conv_kernel_size]` that holds convolutional states. ssm_states: (`torch.Tensor`): A tensor of shape `[layer_idx, batch_size, intermediate_size, ssm_state_size]` that holds ssm states Example: ```python >>> from transformers import AutoTokenizer, MambaForCausalLM, MambaCache >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") >>> inputs = tokenizer(text="My name is Mamba", return_tensors="pt") >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = MambaCache(config=model.config, batch_size=1, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True) >>> outputs.past_key_values MambaCache() ``` - update_conv_state - update_ssm_state - reset
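Besides constructing one of the cache classes above manually and passing it through `past_key_values`, several of them can be selected by name via `generate`'s `cache_implementation` argument (the `"static"` value is also shown in the compile-config example later in this document). The sketch below additionally assumes a `"quantized"` implementation string and a dict-style `cache_config` are available in your version; treat those as assumptions.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
>>> inputs = tokenizer("My name is Qwen2", return_tensors="pt")

>>> # Let `generate` build a StaticCache internally instead of instantiating it by hand
>>> out = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")

>>> # Quantized KV cache through the same route (assumed API; requires the optional `quanto` package)
>>> out = model.generate(
...     **inputs, max_new_tokens=20, cache_implementation="quantized", cache_config={"backend": "quanto", "nbits": 4}
... )
```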
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#caches
#caches
.md
427_14
Class that holds arguments for watermark generation and should be passed into `GenerationConfig` during `generate`. See [this paper](https://arxiv.org/abs/2306.04634) for more details on the arguments. Accepts the following keys: - greenlist_ratio (`float`): Used for watermarking. The ratio of "green" tokens used to the vocabulary size. Defaults to 0.25. - bias (`float`): Used with watermarking. The bias added to the selected "green" tokens' logits. Defaults to 2.0. - hashing_key (`int`): Hashing key used for watermarking. Defaults to 15485863 (the millionth prime). - seeding_scheme (`str`): Algorithm to use for watermarking. Accepts values: - "lefthash" (default): "green" tokens selection depend on the last token (Algorithm 2 from the paper) - "selfhash": "green" tokens selection depends on the current token itself (Algorithm 3 from the paper) The downside of this scheme is that it considers all possible next tokens and can be slower than "lefthash". - context_width(`int`): The context length of previous tokens to use in seeding. Higher context length makes watermarking more robust. - __call__ Detector for detection of watermark generated text. The detector needs to be given the exact same settings that were given during text generation to replicate the watermark greenlist generation and so detect the watermark. This includes the correct device that was used during text generation, the correct watermarking arguments and the correct tokenizer vocab size. The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main). See [the paper](https://arxiv.org/abs/2306.04634) for more information. Args: model_config (`PretrainedConfig`): The model config that will be used to get model specific arguments used when generating. device (`str`): The device which was used during watermarked text generation. watermarking_config (Union[`WatermarkingConfig`, `Dict`]): The exact same watermarking config and arguments used when generating text. ignore_repeated_ngrams (`bool`, *optional*, defaults to `False`): Whether to count every unique ngram only once or not. max_cache_size (`int`, *optional*, defaults to 128): The max size to be used for LRU caching of seeding/sampling algorithms called for every token. 
Examples: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig >>> model_id = "openai-community/gpt2" >>> model = AutoModelForCausalLM.from_pretrained(model_id) >>> tok = AutoTokenizer.from_pretrained(model_id) >>> tok.pad_token_id = tok.eos_token_id >>> tok.padding_side = "left" >>> inputs = tok(["This is the beginning of a long story", "Alice and Bob are"], padding=True, return_tensors="pt") >>> input_len = inputs["input_ids"].shape[-1] >>> # first generate text with watermark and without >>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash") >>> out_watermarked = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20) >>> out = model.generate(**inputs, do_sample=False, max_length=20) >>> # now we can instantiate the detector and check the generated text >>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config) >>> detection_out_watermarked = detector(out_watermarked, return_dict=True) >>> detection_out = detector(out, return_dict=True) >>> detection_out_watermarked.prediction array([ True, True]) >>> detection_out.prediction array([False, False]) ``` - __call__ This is the configuration class to store the configuration of a [`BayesianDetectorModel`]. It is used to instantiate a Bayesian Detector model according to the specified arguments. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: watermarking_depth (`int`, *optional*): The number of tournament layers. base_rate (`float1`, *optional*, defaults to 0.5): Prior probability P(w) that a text is watermarked. Bayesian classifier for watermark detection. This detector uses Bayes' rule to compute a watermarking score, which is the sigmoid of the log of ratio of the posterior probabilities P(watermarked|g_values) and P(unwatermarked|g_values). Please see the section on BayesianScore in the paper for further details. Paper URL: https://www.nature.com/articles/s41586-024-08025-4 Note that this detector only works with non-distortionary Tournament-based watermarking using the Bernoulli(0.5) g-value distribution. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`BayesianDetectorConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. - forward Class that holds arguments for watermark generation and should be passed into `GenerationConfig` during `generate`. See [this paper](https://www.nature.com/articles/s41586-024-08025-4) for more details on the arguments. Args: ngram_len (`int`): Ngram length. keys (`List[int]`): A sequence of watermarking keys, one for each depth. context_history_size (`int`, *optional*, defaults to 1024): Size of the tensor to keep track of seen contexts. 
sampling_table_seed (`int`, *optional*, defaults to 0): Random seed to generate the sampling table. sampling_table_size (`int`, *optional*, defaults to 65536): Size of the sampling table. skip_first_ngram_calls (`bool`, *optional*, defaults to `False`): Whether to skip the first ngram calls. debug_mode (`bool`, *optional*, defaults to `False`): If set, logits are modified to a uniform distribution before the watermarking modification is applied. This is intended for testing the implementation. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig >>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b', padding_side="left") >>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b') >>> # SynthID Text configuration >>> watermarking_config = SynthIDTextWatermarkingConfig( ... keys=[654, 400, 836, 123, 340, 443, 597, 160, 57], ... ngram_len=5, ... ) >>> # Generation with watermarking >>> tokenized_prompts = tokenizer(["Once upon a time, "], return_tensors="pt", padding=True) >>> output_sequences = model.generate( ... **tokenized_prompts, watermarking_config=watermarking_config, do_sample=True, max_new_tokens=10 ... ) >>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens=True) ``` SynthID text watermark detector class. This class has to be initialized with a trained Bayesian detector module; see the script in examples/synthid_text/detector_training.py for an example of training/saving/loading this detector module. The folder also showcases an example use case of this detector. Parameters: detector_module ([`BayesianDetectorModel`]): Bayesian detector module object initialized with parameters. Check examples/research_projects/synthid_text/detector_training.py for usage. logits_processor (`SynthIDTextWatermarkLogitsProcessor`): The logits processor used for watermarking. tokenizer (`Any`): The tokenizer used for the model. Examples: ```python >>> from transformers import ( ... AutoTokenizer, BayesianDetectorModel, SynthIDTextWatermarkLogitsProcessor, SynthIDTextWatermarkDetector ... ) >>> # Load the detector. See examples/research_projects/synthid_text for training a detector. >>> detector_model = BayesianDetectorModel.from_pretrained("joaogante/dummy_synthid_detector") >>> logits_processor = SynthIDTextWatermarkLogitsProcessor( ... **detector_model.config.watermarking_config, device="cpu" ... ) >>> tokenizer = AutoTokenizer.from_pretrained(detector_model.config.model_name) >>> detector = SynthIDTextWatermarkDetector(detector_model, logits_processor, tokenizer) >>> # Test whether a certain string is watermarked >>> test_input = tokenizer(["This is a test input"], return_tensors="pt") >>> is_watermarked = detector(test_input.input_ids) ``` - __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#watermark-utils
#watermark-utils
.md
427_15
Class that holds arguments relative to `torch.compile` behavior, when using automatic compilation in `generate`. See [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html) for more details on the arguments. Args: fullgraph (`bool`, *optional*, defaults to `True`): If `True`, requires that the whole forward be capturable in a single graph. dynamic (`bool` or `None`, *optional*): Whether to try to use dynamic shape graphs. backend (`str` or `Callable`, *optional*, defaults to `"inductor"`): Backend to be used. mode (`str`, *optional*, defaults to `"reduce-overhead"`): Controls balance between performance and overhead. options (`dict`, *optional*): A dictionary of options to pass to the backend. Examples: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, CompileConfig >>> tokenizer = AutoTokenizer.from_pretrained('google/gemma-2-2b') >>> model = AutoModelForCausalLM.from_pretrained('google/gemma-2-2b').cuda() >>> # Automatic compile configuration, used with static cache >>> compile_config = CompileConfig(dynamic=True) >>> # Generation with static cache and compile config >>> input = tokenizer.encode("Hello there, how", return_tensors="pt").cuda() >>> output = model.generate( ... input, do_sample=False, max_new_tokens=300, cache_implementation="static", compile_config=compile_config ... ) >>> output_text = tokenizer.batch_decode(output, skip_special_tokens=True)[0] ``` - __call__
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/generation_utils.md
https://huggingface.co/docs/transformers/en/internal/generation_utils/#compile-utils
#compile-utils
.md
427_16
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/
.md
428_0
This page lists all the utility functions and classes that can be used for Time Series based models. Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/#time-series-utilities
#time-series-utilities
.md
428_1
time_series_utils.NormalOutput Normal distribution output class. time_series_utils.StudentTOutput Student-T distribution output class. time_series_utils.NegativeBinomialOutput Negative Binomial distribution output class.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/time_series_utils.md
https://huggingface.co/docs/transformers/en/internal/time_series_utils/#distributional-output
#distributional-output
.md
428_2
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/
.md
429_0
This page lists all the utility functions used by the tokenizers, mainly the class [`~tokenization_utils_base.PreTrainedTokenizerBase`] that implements the common methods between [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] and the mixin [`~tokenization_utils_base.SpecialTokensMixin`]. Most of those are only useful if you are studying the code of the tokenizers in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#utilities-for-tokenizers
#utilities-for-tokenizers
.md
429_1
tokenization_utils_base.PreTrainedTokenizerBase Base class for [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]. Handles shared (mostly boiler plate) methods for those two classes. Class attributes (overridden by derived classes) - **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). - **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file. - **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model. - **padding_side** (`str`) -- The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`. - **truncation_side** (`str`) -- The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*): The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*): The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*): A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description. model_input_names (`List[string]`, *optional*): The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name. bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`. 
cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`. mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`. additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding if `skip_special_tokens` is set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process. split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. Passing this argument will affect the internal state of the tokenizer. The default behavior is to not split special tokens. This means that if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`. - __call__ - all
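To make the `split_special_tokens` behavior above concrete, here is a minimal sketch; the checkpoint name is only an illustration, and the exact word pieces in the split case depend on the vocabulary:

```python
from transformers import AutoTokenizer

# Default: special tokens are kept intact during tokenization.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("[CLS] Hello world"))  # ['[CLS]', 'hello', 'world']

# With split_special_tokens=True, "[CLS]" is treated as regular text and split into pieces.
tokenizer_split = AutoTokenizer.from_pretrained("bert-base-uncased", split_special_tokens=True)
print(tokenizer_split.tokenize("[CLS] Hello world"))  # e.g. ['[', 'cl', '##s', ']', 'hello', 'world']
```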
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#pretrainedtokenizerbase
#pretrainedtokenizerbase
.md
429_2
tokenization_utils_base.SpecialTokensMixin A mixin derived by [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] to handle specific behaviors related to special tokens. In particular, this class holds the attributes which can be used to directly access these special tokens in a model-independent manner and allows setting and updating the special tokens. Args: bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional tokens, which will be marked as `special`, meaning that they will be skipped when decoding if `skip_special_tokens` is set to `True`.
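A short sketch of how the special-token attributes exposed by this mixin are typically used (the checkpoint name and the added tokens are only illustrations):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

# Read special tokens in a model-independent way.
print(tokenizer.eos_token, tokenizer.eos_token_id)

# GPT-2 has no padding token; reuse the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token

# Register extra special tokens; they are skipped when decoding with skip_special_tokens=True.
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<|user|>", "<|assistant|>"]})
print(num_added)  # number of tokens added to the vocabulary
```

If new tokens were added, remember to resize the model's token embeddings afterwards (`model.resize_token_embeddings(len(tokenizer))`).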
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#specialtokensmixin
#specialtokensmixin
.md
429_3
tokenization_utils_base.TruncationStrategy Possible values for the `truncation` argument in [`PreTrainedTokenizerBase.__call__`]. Useful for tab-completion in an IDE. tokenization_utils_base.CharSpan Character span in the original string. Args: start (`int`): Index of the first character in the original string. end (`int`): Index of the character following the last character in the original string. tokenization_utils_base.TokenSpan Token span in an encoded string (list of tokens). Args: start (`int`): Index of the first token in the span. end (`int`): Index of the token following the last token in the span.
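As a brief illustration of these span namedtuples (assuming a fast tokenizer so offset information is available; the checkpoint is only an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "Hello world"
encoding = tokenizer(text)

# token_to_chars returns a CharSpan: start/end character indices into `text`.
char_span = encoding.token_to_chars(1)  # token index 1 is the first token after [CLS]
print(char_span.start, char_span.end, text[char_span.start : char_span.end])
```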
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/tokenization_utils.md
https://huggingface.co/docs/transformers/en/internal/tokenization_utils/#enums-and-namedtuples
#enums-and-namedtuples
.md
429_4
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/
.md
430_0
This page lists all the utility functions used by the image processors, mainly the functional transformations used to process the images. Most of those are only useful if you are studying the code of the image processors in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#utilities-for-image-processors
#utilities-for-image-processors
.md
430_1
image_transforms.center_crop Crops the `image` to the specified `size` using a center crop. Note that if the image is too small to be cropped to the size given, it will be padded (so the returned result will always be of size `size`). Args: image (`np.ndarray`): The image to crop. size (`Tuple[int, int]`): The target size for the cropped image. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. return_numpy (`bool`, *optional*): Whether or not to return the cropped image as a numpy array. Used for backwards compatibility with the previous ImageFeatureExtractionMixin method. - Unset: will return the same type as the input image. - `True`: will return a numpy array. - `False`: will return a `PIL.Image.Image` object. Returns: `np.ndarray`: The cropped image. image_transforms.center_to_corners_format Converts bounding boxes from center format to corners format. center format: contains the coordinate for the center of the box and its width, height dimensions (center_x, center_y, width, height) corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y) image_transforms.corners_to_center_format Converts bounding boxes from corners format to center format. corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y) center format: contains the coordinate for the center of the box and its width, height dimensions (center_x, center_y, width, height) image_transforms.id_to_rgb Converts unique ID to RGB color. image_transforms.normalize Normalizes `image` using the mean and standard deviation specified by `mean` and `std`. image = (image - mean) / std Args: image (`np.ndarray`): The image to normalize. mean (`float` or `Iterable[float]`): The mean to use for normalization. std (`float` or `Iterable[float]`): The standard deviation to use for normalization. data_format (`ChannelDimension`, *optional*): The channel dimension format of the output image. If unset, will use the inferred format from the input. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. image_transforms.pad Pads the `image` with the specified (height, width) `padding` and `mode`. Args: image (`np.ndarray`): The image to pad. padding (`int` or `Tuple[int, int]` or `Iterable[Tuple[int, int]]`): Padding to apply to the edges of the height, width axes. Can be one of three formats: - `((before_height, after_height), (before_width, after_width))` unique pad widths for each axis. - `((before, after),)` yields same before and after pad for height and width. - `(pad,)` or int is a shortcut for before = after = pad width for all axes.
mode (`PaddingMode`): The padding mode to use. Can be one of: - `"constant"`: pads with a constant value. - `"reflect"`: pads with the reflection of the vector mirrored on the first and last values of the vector along each axis. - `"replicate"`: pads with the replication of the last value on the edge of the array along each axis. - `"symmetric"`: pads with the reflection of the vector mirrored along the edge of the array. constant_values (`float` or `Iterable[float]`, *optional*): The value to use for the padding if `mode` is `"constant"`. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the same as the input image. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. Returns: `np.ndarray`: The padded image. image_transforms.rgb_to_id Converts RGB color to unique ID. image_transforms.rescale Rescales `image` by `scale`. Args: image (`np.ndarray`): The image to rescale. scale (`float`): The scale to use for rescaling the image. data_format (`ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. dtype (`np.dtype`, *optional*, defaults to `np.float32`): The dtype of the output image. Defaults to `np.float32`. Used for backwards compatibility with feature extractors. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The rescaled image. image_transforms.resize Resizes `image` to `(height, width)` specified by `size` using the PIL library. Args: image (`np.ndarray`): The image to resize. size (`Tuple[int, int]`): The size to use for resizing the image. resample (`int`, *optional*, defaults to `PILImageResampling.BILINEAR`): The filter to use for resampling. reducing_gap (`int`, *optional*): Apply optimization by resizing the image in two steps. The bigger `reducing_gap`, the closer the result to the fair resampling. See corresponding Pillow documentation for more details. data_format (`ChannelDimension`, *optional*): The channel dimension format of the output image. If unset, will use the inferred format from the input. return_numpy (`bool`, *optional*, defaults to `True`): Whether or not to return the resized image as a numpy array. If `False`, a `PIL.Image.Image` object is returned. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `np.ndarray`: The resized image. image_transforms.to_pil_image Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed. Args: image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor` or `tf.Tensor`): The image to convert to the `PIL.Image` format. do_rescale (`bool`, *optional*): Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255).
Will default to `True` if the image type is a floating type and casting to `int` would result in a loss of precision, and `False` otherwise. image_mode (`str`, *optional*): The mode to use for the PIL image. If unset, will use the default mode for the input image type. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `PIL.Image.Image`: The converted image.
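As a rough usage sketch of a few of these functional transforms chained together (the image values, sizes, and normalization constants are illustrative only):

```python
import numpy as np
from transformers.image_transforms import center_crop, normalize, rescale, resize

# A dummy RGB image in channels-last format (height, width, channels).
image = np.random.randint(0, 256, size=(480, 640, 3)).astype(np.uint8)

resized = resize(image, size=(224, 224))        # (224, 224, 3), resized with PIL
rescaled = rescale(resized, scale=1 / 255)      # float32 values in [0, 1]
normalized = normalize(rescaled, mean=0.5, std=0.5)
cropped = center_crop(normalized, size=(200, 200))
print(cropped.shape)                            # (200, 200, 3)
```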
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#image-transformations
#image-transformations
.md
430_2
image_processing_utils.ImageProcessingMixin This is an image processor mixin used to provide saving/loading functionality for sequential and image feature extractors.
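A brief sketch of the saving/loading behavior this mixin provides (the checkpoint and local path are illustrative):

```python
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
image_processor.save_pretrained("./my-image-processor")   # writes preprocessor_config.json
reloaded = AutoImageProcessor.from_pretrained("./my-image-processor")
```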
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/image_processing_utils.md
https://huggingface.co/docs/transformers/en/internal/image_processing_utils/#imageprocessingmixin
#imageprocessingmixin
.md
430_3
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/
.md
431_0
This page lists all the utility functions used by [`Trainer`]. Most of those are only useful if you are studying the code of the Trainer in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities-for-trainer
#utilities-for-trainer
.md
431_1
Evaluation output (always contains labels), to be used to compute metrics. Parameters: predictions (`np.ndarray`): Predictions of the model. label_ids (`np.ndarray`): Targets to be matched. inputs (`np.ndarray`, *optional*): Input data passed to the model. losses (`np.ndarray`, *optional*): Loss values computed during evaluation. IntervalStrategy Helper function for reproducible behavior during distributed training. See - https://pytorch.org/docs/stable/notes/randomness.html for pytorch - https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism for tensorflow Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch` and/or `tf` (if installed). Args: seed (`int`): The seed to set. deterministic (`bool`, *optional*, defaults to `False`): Whether to use deterministic algorithms where available. Can slow down training. Decorator to make all processes in distributed training wait for each local_master to do something. Args: local_rank (`int`): The rank of the local process.
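A quick sketch of the seeding helper described above:

```python
from transformers import set_seed

# Seeds `random`, `numpy` and `torch` (and `tf` if installed).
set_seed(42)

# Optionally request deterministic algorithms where available, at the cost of slower training.
set_seed(42, deterministic=True)
```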
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#utilities
#utilities
.md
431_2
trainer_callback.CallbackHandler Internal class that just calls the list of callbacks in order.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#callbacks-internals
#callbacks-internals
.md
431_3
trainer_pt_utils.DistributedTensorGatherer A class responsible for properly gathering tensors (or nested list/tuple of tensors) on the CPU by chunks. If our dataset has 16 samples with a batch size of 2 on 3 processes and we gather then transfer on CPU at every step, our sampler will generate the following indices: `[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1]` to get something of size a multiple of 3 (so that each process gets the same dataset length). Then process 0, 1 and 2 will be responsible for making predictions for the following samples: - P0: `[0, 1, 2, 3, 4, 5]` - P1: `[6, 7, 8, 9, 10, 11]` - P2: `[12, 13, 14, 15, 0, 1]` The first batch treated on each process will be - P0: `[0, 1]` - P1: `[6, 7]` - P2: `[12, 13]` So if we gather at the end of the first batch, we will get a tensor (nested list/tuple of tensor) corresponding to the following indices: `[0, 1, 6, 7, 12, 13]` If we directly concatenate our results without taking any precautions, the user will then get the predictions for the indices in this order at the end of the prediction loop: `[0, 1, 6, 7, 12, 13, 2, 3, 8, 9, 14, 15, 4, 5, 10, 11, 0, 1]` For some reason, that's not going to float their boat. This class is there to solve that problem. Args: world_size (`int`): The number of processes used in the distributed training. num_samples (`int`): The number of samples in our dataset. make_multiple_of (`int`, *optional*): If passed, the class assumes the datasets passed to each process are made to be a multiple of this argument (by adding samples). padding_index (`int`, *optional*, defaults to -100): The padding index to use if the arrays don't all have the same sequence length.
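A rough sketch of the add/finalize usage pattern this class supports; the loop and shapes are illustrative only, and in a real evaluation loop each added array would be that step's predictions already gathered from all processes:

```python
import numpy as np
from transformers.trainer_pt_utils import DistributedTensorGatherer

world_size, num_samples = 3, 16
gatherer = DistributedTensorGatherer(world_size, num_samples)

for _ in range(3):
    # One step's predictions, already gathered across the 3 processes (2 samples each).
    step_preds = np.random.randn(world_size * 2, 5)
    gatherer.add_arrays(step_preds)

predictions = gatherer.finalize()  # re-ordered per process and truncated to `num_samples`
print(predictions.shape)           # (16, 5)
```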
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#distributed-evaluation
#distributed-evaluation
.md
431_4
This subclass of `argparse.ArgumentParser` uses type hints on dataclasses to generate arguments. The class is designed to play well with the native argparse. In particular, you can add more (non-dataclass backed) arguments to the parser after initialization and you'll get the output back after parsing as an additional namespace. Optional: To create sub argument groups use the `_argument_group_name` attribute in the dataclass.
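A minimal sketch of the dataclass-driven usage described above (the dataclass and its fields are made up for illustration):

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class MyArguments:
    model_name: str = field(metadata={"help": "Model checkpoint to load."})
    learning_rate: float = field(default=5e-5, metadata={"help": "Initial learning rate."})

parser = HfArgumentParser(MyArguments)
(args,) = parser.parse_args_into_dataclasses(["--model_name", "bert-base-uncased", "--learning_rate", "3e-5"])
print(args.model_name, args.learning_rate)
```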
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#trainer-argument-parser
#trainer-argument-parser
.md
431_5
debug_utils.DebugUnderflowOverflow This debug class helps detect and understand where the model starts getting very large or very small, and more importantly `nan` or `inf` weight and activation elements. There are 2 working modes: 1. Underflow/overflow detection (default) 2. Specific batch absolute min/max tracing without detection Mode 1: Underflow/overflow detection To activate the underflow/overflow detection, initialize the object with the model: ```python debug_overflow = DebugUnderflowOverflow(model) ``` then run the training as normal and if `nan` or `inf` gets detected in at least one of the weight, input or output elements this module will throw an exception and will print `max_frames_to_save` frames that lead to this event, each frame reporting 1. the fully qualified module name plus the class name whose `forward` was run 2. the absolute min and max value of all elements for each module weights, and the inputs and output For example, here is the header and the last few frames in detection report for `google/mt5-small` run in fp16 mixed precision: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` You can see here, that `T5DenseGatedGeluDense.forward` resulted in output activations, whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout` which renormalizes the weights, after it zeroed some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow. As you can see, it's the previous frames that we need to look into when the numbers start getting very large for fp16. The tracking is done in a forward hook, which gets invoked immediately after `forward` has completed. By default the last 21 frames are printed. You can change the default to adjust for your needs. For example: ```python debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` To validate that you have set up this debugging feature correctly, if you intend to use it in a training run that may take hours to complete, first run it with normal tracing enabled for one or a few batches, as explained in the next section. Mode 2. Specific batch absolute min/max tracing without detection The second work mode is per-batch tracing with the underflow/overflow detection feature turned off. Let's say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed. This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area.
Early stopping: You can also specify the batch number after which to stop the training, with: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ``` This feature is mainly useful in the tracing mode, but you can use it for any mode. **Performance**: As this module measures the absolute `min`/`max` of each weight of the model on every forward pass, it'll slow the training down. Therefore remember to turn it off once the debugging needs have been met. Args: model (`nn.Module`): The model to debug. max_frames_to_save (`int`, *optional*, defaults to 21): How many frames back to record. trace_batch_nums (`List[int]`, *optional*, defaults to `[]`): Which batch numbers to trace (turns detection off). abort_after_batch_num (`int`, *optional*): The batch number after which to stop training.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/trainer_utils.md
https://huggingface.co/docs/transformers/en/internal/trainer_utils/#debug-utilities
#debug-utilities
.md
431_6
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/
.md
432_0
This page lists all the utility functions the library provides for pipelines. Most of those are only useful if you are studying the code of the models in the library.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#utilities-for-pipelines
#utilities-for-pipelines
.md
432_1
pipelines.ArgumentHandler Base interface for handling arguments for each [`~pipelines.Pipeline`]. pipelines.ZeroShotClassificationArgumentHandler Handles arguments for zero-shot text classification by turning each possible label into an NLI premise/hypothesis pair. pipelines.QuestionAnsweringArgumentHandler QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to an internal [`SquadExample`]. QuestionAnsweringArgumentHandler manages all the possible ways to create a [`SquadExample`] from the command-line supplied arguments.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#argument-handling
#argument-handling
.md
432_2
pipelines.PipelineDataFormat Base class for all the data formats supported by pipelines, both for reading and writing. Supported data formats currently include: - JSON - CSV - stdin/stdout (pipe) `PipelineDataFormat` also includes some utilities to work with multiple columns, like mapping from dataset columns to pipeline keyword arguments through the `dataset_kwarg_1=dataset_column_1` format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.CsvPipelineDataFormat Support for pipelines using CSV data format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.JsonPipelineDataFormat Support for pipelines using JSON file format. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`. pipelines.PipedPipelineDataFormat Read data from piped input to the Python process. For multi-column data, columns should be separated by a tab character (`\t`). If columns are provided, then the output will be a dictionary with `{column_x: value_x}`. Args: output_path (`str`): Where to save the outgoing data. input_path (`str`): Where to look for the input data. column (`str`): The column to read. overwrite (`bool`, *optional*, defaults to `False`): Whether or not to overwrite the `output_path`.
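A small, hedged sketch of selecting one of these formats by name with `PipelineDataFormat.from_str` (the file paths and column name are placeholders):

```python
from transformers.pipelines import PipelineDataFormat

# "csv" can also be "json" or "pipe" (stdin/stdout).
data_format = PipelineDataFormat.from_str(
    format="csv",
    output_path="predictions.csv",
    input_path="inputs.csv",
    column="text",
    overwrite=True,
)

for row in data_format:
    ...  # each row holds the value(s) read from the configured column(s)
```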
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#data-format
#data-format
.md
432_3
pipelines.PipelineException Raised by a [`Pipeline`] when handling __call__. Args: task (`str`): The task of the pipeline. model (`str`): The model used by the pipeline. reason (`str`): The error message to display.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/internal/pipelines_utils.md
https://huggingface.co/docs/transformers/en/internal/pipelines_utils/#utilities
#utilities
.md
432_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/
.md
433_0
<Tip> Try AWQ quantization with this [notebook](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)! </Tip> [Activation-aware Weight Quantization (AWQ)](https://hf.co/papers/2306.00978) doesn't quantize all the weights in a model, and instead, it preserves a small percentage of weights that are important for LLM performance. This significantly reduces quantization loss such that you can run models in 4-bit precision without experiencing any performance degradation. There are several libraries for quantizing models with the AWQ algorithm, such as [llm-awq](https://github.com/mit-han-lab/llm-awq), [autoawq](https://github.com/casper-hansen/AutoAWQ) or [optimum-intel](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc). Transformers supports loading models quantized with the llm-awq and autoawq libraries. This guide will show you how to load models quantized with autoawq, but the process is similar for llm-awq quantized models. Make sure you have autoawq installed: ```bash pip install autoawq ``` AWQ-quantized models can be identified by checking the `quantization_config` attribute in the model's [config.json](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/blob/main/config.json) file: ```json { "_name_or_path": "/workspace/process/huggingfaceh4_zephyr-7b-alpha/source", "architectures": [ "MistralForCausalLM" ], ... ... ... "quantization_config": { "quant_method": "awq", "zero_point": true, "group_size": 128, "bits": 4, "version": "gemm" } } ``` A quantized model is loaded with the [`~PreTrainedModel.from_pretrained`] method. If you loaded your model on the CPU, make sure to move it to a GPU device first. Use the `device_map` parameter to specify where to place the model: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0") ``` Loading an AWQ-quantized model automatically sets other weights to fp16 by default for performance reasons. If you want to load these other weights in a different format, use the `torch_dtype` parameter: ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32) ``` AWQ quantization can also be combined with [FlashAttention-2](../perf_infer_gpu_one#flashattention-2) to further accelerate inference: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", attn_implementation="flash_attention_2", device_map="cuda:0") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#awq
#awq
.md
433_1
Fused modules offer improved accuracy and performance, and they are supported out-of-the-box for AWQ modules of the [Llama](https://huggingface.co/meta-llama) and [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) architectures, but you can also fuse AWQ modules for unsupported architectures. <Tip warning={true}> Fused modules cannot be combined with other optimization techniques such as FlashAttention-2. </Tip> <hfoptions id="fuse"> <hfoption id="supported architectures"> To enable fused modules for supported architectures, create an [`AwqConfig`] and set the parameters `fuse_max_seq_len` and `do_fuse=True`. The `fuse_max_seq_len` parameter is the total sequence length and it should include the context length and the expected generation length. You can set it to a larger value to be safe. For example, to fuse the AWQ modules of the [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model: ```python import torch from transformers import AwqConfig, AutoModelForCausalLM model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, do_fuse=True, ) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) ``` The [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model was benchmarked with `batch_size=1` with and without fused modules. <figcaption class="text-center text-gray-500 text-lg">Unfused module</figcaption> | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) | |-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------| | 1 | 32 | 32 | 60.0984 | 38.4537 | 4.50 GB (5.68%) | | 1 | 64 | 64 | 1333.67 | 31.6604 | 4.50 GB (5.68%) | | 1 | 128 | 128 | 2434.06 | 31.6272 | 4.50 GB (5.68%) | | 1 | 256 | 256 | 3072.26 | 38.1731 | 4.50 GB (5.68%) | | 1 | 512 | 512 | 3184.74 | 31.6819 | 4.59 GB (5.80%) | | 1 | 1024 | 1024 | 3148.18 | 36.8031 | 4.81 GB (6.07%) | | 1 | 2048 | 2048 | 2927.33 | 35.2676 | 5.73 GB (7.23%) | <figcaption class="text-center text-gray-500 text-lg">Fused module</figcaption> | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) | |-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------| | 1 | 32 | 32 | 81.4899 | 80.2569 | 4.00 GB (5.05%) | | 1 | 64 | 64 | 1756.1 | 106.26 | 4.00 GB (5.05%) | | 1 | 128 | 128 | 2479.32 | 105.631 | 4.00 GB (5.06%) | | 1 | 256 | 256 | 1813.6 | 85.7485 | 4.01 GB (5.06%) | | 1 | 512 | 512 | 2848.9 | 97.701 | 4.11 GB (5.19%) | | 1 | 1024 | 1024 | 3044.35 | 87.7323 | 4.41 GB (5.57%) | | 1 | 2048 | 2048 | 2715.11 | 89.4709 | 5.57 GB (7.04%) | The speed and throughput of fused and unfused modules were also tested with the [optimum-benchmark](https://github.com/huggingface/optimum-benchmark) library.
<div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_forward_memory_plot.png" alt="generate throughput per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">forward peak memory/batch size</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_generate_throughput_plot.png" alt="forward latency per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">generate throughput/batch size</figcaption> </div> </div> </hfoption> <hfoption id="unsupported architectures"> For architectures that don't support fused modules yet, you need to create a custom fusing mapping to define which modules need to be fused with the `modules_to_fuse` parameter. For example, to fuse the AWQ modules of the [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) model. ```python import torch from transformers import AwqConfig, AutoModelForCausalLM model_id = "TheBloke/Yi-34B-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, modules_to_fuse={ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"], "layernorm": ["ln1", "ln2", "norm"], "mlp": ["gate_proj", "up_proj", "down_proj"], "use_alibi": False, "num_attention_heads": 56, "num_key_value_heads": 8, "hidden_size": 7168 } ) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) ``` The parameter `modules_to_fuse` should include: - `"attention"`: The names of the attention layers to fuse in the following order: query, key, value and output projection layer. If you don't want to fuse these layers, pass an empty list. - `"layernorm"`: The names of all the LayerNorm layers you want to replace with a custom fused LayerNorm. If you don't want to fuse these layers, pass an empty list. - `"mlp"`: The names of the MLP layers you want to fuse into a single MLP layer in the order: (gate (dense, layer, post-attention) / up / down layers). - `"use_alibi"`: If your model uses ALiBi positional embedding. - `"num_attention_heads"`: The number of attention heads. - `"num_key_value_heads"`: The number of key value heads that should be used to implement Grouped Query Attention (GQA). If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. - `"hidden_size"`: The dimension of the hidden representations. </hfoption> </hfoptions>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#fused-modules
#fused-modules
.md
433_2
Recent versions of `autoawq` support ExLlama-v2 kernels for faster prefill and decoding. To get started, first install the latest version of `autoawq` by running: ```bash pip install git+https://github.com/casper-hansen/AutoAWQ.git ``` Then pass an `AwqConfig()` with `version="exllama"`: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig quantization_config = AwqConfig(version="exllama") model = AutoModelForCausalLM.from_pretrained( "TheBloke/Mistral-7B-Instruct-v0.1-AWQ", quantization_config=quantization_config, device_map="auto", ) input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cuda") output = model(input_ids) print(output.logits) tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-AWQ") input_ids = tokenizer.encode("How to make a cake", return_tensors="pt").to(model.device) output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=50256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <Tip warning={true}> Note this feature is supported on AMD GPUs. </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#exllama-v2-support
#exllama-v2-support
.md
433_3
Recent versions of `autoawq` support CPU execution with IPEX op optimizations. To get started, first install the latest version of `autoawq` by running: ```bash pip install intel-extension-for-pytorch pip install git+https://github.com/casper-hansen/AutoAWQ.git ``` Then pass an `AwqConfig()` with `version="ipex"`: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig quantization_config = AwqConfig(version="ipex") model = AutoModelForCausalLM.from_pretrained( "TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ", quantization_config=quantization_config, device_map="cpu", ) input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cpu") output = model(input_ids) print(output.logits) tokenizer = AutoTokenizer.from_pretrained("TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ") input_ids = tokenizer.encode("How to make a cake", return_tensors="pt") pad_token_id = tokenizer.eos_token_id output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=pad_token_id) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <Tip warning={true}> Note this feature is supported on Intel CPUs. </Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/awq.md
https://huggingface.co/docs/transformers/en/quantization/awq/#cpu-support
#cpu-support
.md
433_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/
.md
434_0
The [EETQ](https://github.com/NetEase-FuXi/EETQ) library supports int8 per-channel weight-only quantization for NVIDIA GPUs. The high-performance GEMM and GEMV kernels are from FasterTransformer and TensorRT-LLM. It requires no calibration dataset and the model does not need to be pre-quantized. Moreover, the accuracy degradation is negligible owing to the per-channel quantization. Make sure you have eetq installed from the [release page](https://github.com/NetEase-FuXi/EETQ/releases): ``` pip install --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.0/EETQ-1.0.0+cu121+torch2.1.2-cp310-cp310-linux_x86_64.whl ``` or via the source code at https://github.com/NetEase-FuXi/EETQ. EETQ requires CUDA capability <= 8.9 and >= 7.0: ``` git clone https://github.com/NetEase-FuXi/EETQ.git cd EETQ/ git submodule update --init --recursive pip install . ``` An unquantized model can be quantized via `from_pretrained`: ```py from transformers import AutoModelForCausalLM, EetqConfig path = "/path/to/model" quantization_config = EetqConfig("int8") model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config) ``` A quantized model can be saved via `save_pretrained` and reused again via `from_pretrained`: ```py quant_path = "/path/to/save/quantized/model" model.save_pretrained(quant_path) model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/eetq.md
https://huggingface.co/docs/transformers/en/quantization/eetq/#eetq
#eetq
.md
434_1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/
.md
435_0
[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is the easiest option for quantizing a model to 8 and 4-bit. 8-bit quantization multiplies outliers in fp16 with non-outliers in int8, converts the non-outlier values back to fp16, and then adds them together to return the weights in fp16. This reduces the degradative effect outlier values have on a model's performance. 4-bit quantization compresses a model even further, and it is commonly used with [QLoRA](https://hf.co/papers/2305.14314) to finetune quantized LLMs. To use bitsandbytes, make sure you have the following libraries installed: <hfoptions id="bnb"> <hfoption id="8-bit"> ```bash pip install transformers accelerate bitsandbytes>0.37.0 ``` </hfoption> <hfoption id="4-bit"> ```bash pip install bitsandbytes>=0.39.0 pip install --upgrade accelerate transformers ``` </hfoption> </hfoptions> <Tip> bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend). We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links. </Tip> Now you can quantize a model by passing a `BitsAndBytesConfig` to [`~PreTrainedModel.from_pretrained`] method. This works for any model in any modality, as long as it supports loading with Accelerate and contains `torch.nn.Linear` layers. <hfoptions id="bnb"> <hfoption id="8-bit"> Quantizing a model in 8-bit halves the memory-usage, and for large models, set `device_map="auto"` to efficiently use the GPUs available: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) model_8bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", quantization_config=quantization_config ) ``` By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want. Setting `torch_dtype="auto"` loads the model in the data type defined in a model's `config.json` file. ```py import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) model_8bit = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m", quantization_config=quantization_config, torch_dtype="auto" ) model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype ``` Once a model is quantized to 8-bit, you can't push the quantized weights to the Hub unless you're using the latest version of Transformers and bitsandbytes. If you have the latest versions, then you can push the 8-bit model to the Hub with the [`~PreTrainedModel.push_to_hub`] method. The quantization config.json file is pushed first, followed by the quantized model weights. 
```py from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-560m", quantization_config=quantization_config ) tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") model.push_to_hub("bloom-560m-8bit") ``` </hfoption> <hfoption id="4-bit"> Quantizing a model in 4-bit reduces your memory usage by 4x, and for large models, set `device_map="auto"` to efficiently use the GPUs available: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) model_4bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", quantization_config=quantization_config ) ``` By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want. Setting `torch_dtype="auto"` loads the model in the data type defined in a model's `config.json` file. ```py import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) model_4bit = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m", quantization_config=quantization_config, torch_dtype="auto" ) model_4bit.model.decoder.layers[-1].final_layer_norm.weight.dtype ``` If you have `bitsandbytes>=0.41.3`, you can serialize 4-bit models and push them to the Hugging Face Hub. Simply call `model.push_to_hub()` after loading it in 4-bit precision. You can also save the serialized 4-bit models locally with the `model.save_pretrained()` method. </hfoption> </hfoptions> <Tip warning={true}> Training with 8-bit and 4-bit weights is only supported for training *extra* parameters. </Tip> You can check your memory footprint with the `get_memory_footprint` method: ```py print(model.get_memory_footprint()) ``` Quantized models can be loaded with the [`~PreTrainedModel.from_pretrained`] method without needing to specify the `load_in_8bit` or `load_in_4bit` parameters: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#bitsandbytes
#bitsandbytes
.md
435_1
<Tip> Learn more about the details of 8-bit quantization in this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration)! </Tip> This section explores some of the specific features of 8-bit models, such as offloading, outlier thresholds, skipping module conversion, and finetuning.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#8-bit-llmint8-algorithm
#8-bit-llmint8-algorithm
.md
435_2
8-bit models can offload weights between the CPU and GPU to support fitting very large models into memory. The weights dispatched to the CPU are actually stored in **float32**, and aren't converted to 8-bit. For example, to enable offloading for the [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) model, start by creating a [`BitsAndBytesConfig`]: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True) ``` Design a custom device map to fit everything on your GPU except for the `lm_head`, which you'll dispatch to the CPU: ```py device_map = { "transformer.word_embeddings": 0, "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h": 0, "transformer.ln_f": 0, } ``` Now load your model with the custom `device_map` and `quantization_config`: ```py model_8bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", torch_dtype="auto", device_map=device_map, quantization_config=quantization_config, ) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#offloading
#offloading
.md
435_3
An "outlier" is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values of magnitude up to ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning).

To find the best threshold for your model, we recommend experimenting with the `llm_int8_threshold` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_threshold=10,
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config,
)
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#outlier-threshold
#outlier-threshold
.md
435_4
For some models, like [Jukebox](model_doc/jukebox), you don't need to quantize every module to 8-bit; in fact, quantizing certain modules can cause instability. With Jukebox, there are several `lm_head` modules that should be skipped using the `llm_int8_skip_modules` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config,
)
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#skip-module-conversion
#skip-module-conversion
.md
435_5
With the [PEFT](https://github.com/huggingface/peft) library, you can finetune large models like [flan-t5-large](https://huggingface.co/google/flan-t5-large) and [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) with 8-bit quantization. You don't need to pass the `device_map` parameter for training because it'll automatically load your model on a GPU. However, you can still customize the device map with the `device_map` parameter if you want to (`device_map="auto"` should only be used for inference).
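The snippet below is a minimal sketch of what this setup might look like with the PEFT library, assuming `peft` is installed; the checkpoint, LoRA hyperparameters, and target modules are illustrative, not prescriptive.

```py
# A minimal sketch of 8-bit finetuning with PEFT; hyperparameters and target modules are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "facebook/opt-6.7b"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for training (gradient checkpointing, dtype casts, etc.)
model = prepare_model_for_kbit_training(model)

# Only the LoRA adapter weights (the "extra" parameters) are trained; the 8-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here, the model can be passed to a regular training loop or [`Trainer`] like any other model.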
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#finetuning
#finetuning
.md
435_6
<Tip>

Try 4-bit quantization in this [notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) and learn more about its details in this [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).

</Tip>

This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#4-bit-qlora-algorithm
#4-bit-qlora-algorithm
.md
435_7
To speed up computation, you can change the data type from float32 (the default value) to bf16 using the `bnb_4bit_compute_dtype` parameter in [`BitsAndBytesConfig`]:

```py
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```
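As a quick illustration, this config is passed to [`~PreTrainedModel.from_pretrained`] like any other quantization config; the checkpoint below is only an example.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

# The checkpoint is illustrative; matrix multiplications now run in bfloat16 while the weights stay in 4-bit.
model_4bit = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,
    device_map="auto",
)
```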
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#compute-data-type
#compute-data-type
.md
435_8
NF4 is a 4-bit data type from the [QLoRA](https://hf.co/papers/2305.14314) paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. This can be configured with the `bnb_4bit_quant_type` parameter in the [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", quantization_config=nf4_config)
```

For inference, the `bnb_4bit_quant_type` does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the same `bnb_4bit_compute_dtype` and `torch_dtype` values.
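For example, a configuration that keeps the quantization type, compute dtype, and `torch_dtype` consistent might look like the following sketch; the checkpoint name and dtype choices are illustrative.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # match the dtype used for the non-quantized modules
)

model_nf4 = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",        # illustrative checkpoint
    torch_dtype=torch.bfloat16,  # keep torch_dtype consistent with the compute dtype
    quantization_config=nf4_config,
)
```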
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#normal-float-4-nf4
#normal-float-4-nf4
.md
435_9
Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. For example, with nested quantization, you can finetune a [Llama-13b](https://huggingface.co/meta-llama/Llama-2-13b) model on a 16GB NVIDIA T4 GPU with a sequence length of 1024, a batch size of 1, and gradient accumulation over 4 steps.

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

double_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

model_double_quant = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b", torch_dtype="auto", quantization_config=double_quant_config)
```
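As a rough back-of-the-envelope check of the 0.4 bits/parameter figure (a sketch, not a benchmark):

```py
# Approximate extra memory saved by nested (double) quantization for a 13B-parameter model
num_params = 13e9
bits_saved_per_param = 0.4

savings_gb = num_params * bits_saved_per_param / 8 / 1e9  # bits -> bytes -> GB
print(f"~{savings_gb:.2f} GB saved")  # ~0.65 GB
```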
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#nested-quantization
#nested-quantization
.md
435_10
Once quantized, you can dequantize the model back to its original precision, but this might result in a small loss of model quality. Make sure you have enough GPU RAM to fit the dequantized model.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer

model_id = "facebook/opt-125m"

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=BitsAndBytesConfig(load_in_4bit=True))
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.dequantize()

text = tokenizer("Hello my name is", return_tensors="pt").to(0)

out = model.generate(**text)
print(tokenizer.decode(out[0]))
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitsandbytes.md
https://huggingface.co/docs/transformers/en/quantization/bitsandbytes/#dequantizing-bitsandbytes-models
#dequantizing-bitsandbytes-models
.md
435_11
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/
.md
436_0
[TorchAO](https://github.com/pytorch/ao) is an architecture optimization library for PyTorch that provides high-performance dtypes, optimization techniques, and kernels for inference and training, featuring composability with native PyTorch features like `torch.compile` and FSDP. Some benchmark numbers can be found [here](https://github.com/pytorch/ao/tree/main/torchao/quantization#benchmarks).

Before you begin, make sure the following libraries are installed with their latest version:

```bash
# Updating 🤗 Transformers to the latest version, as the example script below uses the new auto compilation
pip install --upgrade torch torchao transformers
```

By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file to automatically load the most memory-optimal data type.

```py
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"
# We support int4_weight_only, int8_weight_only and int8_dynamic_activation_int8_weight
# More examples and documentation for the arguments can be found in https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto", quantization_config=quantization_config)

tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# auto-compile the quantized model with `cache_implementation="static"` to get a speedup
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

# benchmark the performance
import torch.utils.benchmark as benchmark

def benchmark_fn(f, *args, **kwargs):
    # Manual warmup
    for _ in range(5):
        f(*args, **kwargs)

    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)",
        globals={"args": args, "kwargs": kwargs, "f": f},
        num_threads=torch.get_num_threads(),
    )
    return f"{(t0.blocked_autorange().mean):.3f}"

MAX_NEW_TOKENS = 1000
print("int4wo-128 model:", benchmark_fn(quantized_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))

bf16_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda", torch_dtype=torch.bfloat16)
output = bf16_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")  # auto-compile
print("bf16 model:", benchmark_fn(bf16_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#torchao
#torchao
.md
436_1
torchao quantization is implemented with [tensor subclasses](https://pytorch.org/docs/stable/notes/extending.html#subclassing-torch-tensor), so it only works with Hugging Face non-safetensors serialization and deserialization. It relies on `torch.load(..., weights_only=True)` to avoid arbitrary user code execution during load time and uses [add_safe_globals](https://pytorch.org/docs/stable/notes/serialization.html#torch.serialization.add_safe_globals) to allowlist known user functions. The reason it does not support safetensors serialization is that wrapper tensor subclasses allow maximum flexibility, which keeps the effort of supporting new quantized tensor formats low. Safetensors, on the other hand, optimizes for maximum safety (no user code execution), which means every new quantization format has to be supported manually.

```py
# save quantized model locally
output_dir = "llama3-8b-int4wo-128"
quantized_model.save_pretrained(output_dir, safe_serialization=False)

# push to huggingface hub
# save_to = "{user_id}/llama3-8b-int4wo-128"
# quantized_model.push_to_hub(save_to, safe_serialization=False)

# load quantized model
ckpt_id = "llama3-8b-int4wo-128"  # or huggingface hub model id
loaded_quantized_model = AutoModelForCausalLM.from_pretrained(ckpt_id, device_map="cuda")

# confirm the speedup
loaded_quantized_model = torch.compile(loaded_quantized_model, mode="max-autotune")
print("loaded int4wo-128 model:", benchmark_fn(loaded_quantized_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS))
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/torchao.md
https://huggingface.co/docs/transformers/en/quantization/torchao/#serialization-and-deserialization
#serialization-and-deserialization
.md
436_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/
.md
437_0
<Tip>

Try GPTQ quantization with PEFT in this [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) and learn more about its details in this [blog post](https://huggingface.co/blog/gptq-integration)!

</Tip>

Both the [GPTQModel](https://github.com/ModelCloud/GPTQModel) and [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) libraries implement the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes error. These weights are quantized to int4, stored as int32 (int4 x 8), and dequantized (restored) to fp16 on the fly during inference. This can save memory by almost 4x because the int4 weights are often dequantized in a fused kernel. You can also expect a substantial speedup in inference due to lower bandwidth requirements for lower bitwidth.

[GPTQModel](https://github.com/ModelCloud/GPTQModel) started as a maintained fork of AutoGPTQ but has since differentiated itself with the following major differences.

* Model support: GPTQModel continues to support all of the latest LLM models.
* Multimodal support: GPTQModel supports accurate quantization of Qwen 2-VL and Ovis 1.6-VL image-to-text models.
* Platform support: Linux, macOS (Apple Silicon), and Windows 11.
* Hardware support: NVIDIA CUDA, AMD ROCm, Apple Silicon M1/MPS/CPU, Intel/AMD CPU, and Intel Datacenter Max/Arc GPUs.
* Asymmetric support: Asymmetric quantization can potentially introduce lower quantization errors compared to symmetric quantization. However, it is not backward compatible with AutoGPTQ, and not all kernels, such as Marlin, support asymmetric quantization.
* IPEX kernel for Intel/AMD accelerated CPU and Intel GPU (Datacenter Max/Arc GPUs) support.
* Updated Marlin kernel from Neural Magic optimized for A100 (Ampere).
* Updated kernels with auto-padding for legacy model support and models with non-uniform in/out-features.
* Faster quantization, lower memory usage, and more accurate default quantization via GPTQModel quantization APIs.
* User and developer friendly APIs.

[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) will likely be deprecated in the future due to the lack of continued support for new models and features.

Before you begin, make sure the following libraries are installed and updated to the latest release:

```bash
pip install --upgrade accelerate optimum transformers
```

Then install either GPTQModel or AutoGPTQ.

```bash
pip install gptqmodel --no-build-isolation
```

or

```bash
pip install auto-gptq --no-build-isolation
```

To quantize a model (currently only supported for text models), you need to create a [`GPTQConfig`] class and set the number of bits to quantize to, a dataset to calibrate the weights for quantization, and a tokenizer to prepare the dataset.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```

You could also pass your own dataset as a list of strings, but it is highly recommended to use the same dataset from the GPTQ paper.

```py
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```

Load a model to quantize and pass the `gptq_config` to the [`~AutoModelForCausalLM.from_pretrained`] method.
Set `device_map="auto"` to automatically offload the model to a CPU to help fit the model in memory, and to allow the model modules to be moved between the CPU and GPU for quantization.

```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```

Disk offloading is not supported, so if you're running out of memory because the dataset is too large, try passing the `max_memory` parameter to allocate the amount of memory to use on your devices (GPU and CPU):

```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "30GiB", 1: "46GiB", "cpu": "30GiB"}, quantization_config=gptq_config)
```

<Tip warning={true}>

Depending on your hardware, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model on a free-tier Google Colab GPU, but it'll take ~4 hours to quantize a 175B parameter model on an NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub to see if a GPTQ-quantized version of the model already exists.

</Tip>

Once your model is quantized, you can push the model and tokenizer to the Hub where they can be easily shared and accessed. Use the [`~PreTrainedModel.push_to_hub`] method to save the [`GPTQConfig`]:

```py
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```

You could also save your quantized model locally with the [`~PreTrainedModel.save_pretrained`] method. If the model was quantized with the `device_map` parameter, make sure to move the entire model to a GPU or CPU before saving it. For example, to save the model on a CPU:

```py
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")

# if quantized with device_map set
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```

Reload a quantized model with the [`~PreTrainedModel.from_pretrained`] method, and set `device_map="auto"` to automatically distribute the model on all available GPUs to load the model faster without using more memory than needed.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#gptq
#gptq
.md
437_1
[Marlin](https://github.com/IST-DASLab/marlin) is a 4-bit only CUDA GPTQ kernel, highly optimized for the NVIDIA A100 GPU (Ampere) architecture. Loading, dequantization, and execution of post-dequantized weights are highly parallelized, offering a substantial inference improvement versus the original CUDA GPTQ kernel. Marlin is only available for quantized inference and does not support model quantization. Marlin inference can be activated with the `backend` parameter in [`GPTQConfig`]. ```py from transformers import AutoModelForCausalLM, GPTQConfig model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=GPTQConfig(bits=4, backend="marlin")) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#marlin
#marlin
.md
437_2
[ExLlama](https://github.com/turboderp/exllama) is a CUDA implementation of the [Llama](model_doc/llama) model that is designed for faster inference with 4-bit GPTQ weights (check out these [benchmarks](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark)). The ExLlama kernel is activated by default when you create a [`GPTQConfig`] object. To boost inference speed even further, use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels by configuring the `exllama_config` parameter: ```py import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, exllama_config={"version":2}) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config) ``` <Tip warning={true}> Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you're finetuning a quantized model with PEFT. </Tip> The ExLlama kernels are only supported when the entire model is on the GPU. If you're doing inference on a CPU with AutoGPTQ or GPTQModel, then you'll need to disable the ExLlama kernel. This overwrites the attributes related to the ExLlama kernels in the quantization config of the config.json file. ```py import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, use_exllama=False) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="cpu", quantization_config=gptq_config) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/gptq.md
https://huggingface.co/docs/transformers/en/quantization/gptq/#exllama
#exllama
.md
437_3
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/
.md
438_0
Quantization techniques focus on representing data with less information while trying not to lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.

<Tip>

Interested in adding a new quantization method to Transformers? Read the [HfQuantizer](./contribute) guide to learn how!

</Tip>

<Tip>

If you are new to the quantization field, we recommend checking out these beginner-friendly courses about quantization, created in collaboration with DeepLearning.AI:

* [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
* [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)

</Tip>
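As a rough illustration of the storage arithmetic described above (a sketch; real checkpoints also include non-weight tensors and metadata):

```py
# Approximate checkpoint size of a 7B-parameter model at different precisions
num_params = 7e9
bytes_per_param = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    print(f"{dtype}: ~{num_params * nbytes / 1e9:.1f} GB")
# float32: ~28.0 GB, float16: ~14.0 GB, int8: ~7.0 GB, int4: ~3.5 GB
```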
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#quantization
#quantization
.md
438_1
The community has developed many quantization methods for various use cases. With Transformers, you can run any of these integrated methods depending on your use case because each method has its own pros and cons. For example, some quantization methods require calibrating the model with a dataset for more accurate and "extreme" compression (up to 1-2 bits quantization), while other methods work out of the box with on-the-fly quantization. Another parameter to consider is compatibility with your target device. Do you want to quantize on a CPU, GPU, or Apple silicon? In short, supporting a wide range of quantization methods allows you to pick the best quantization method for your specific use case.

Use the table below to help you decide which quantization method to use.

| Quantization Method | On the fly quantization | CPU | CUDA GPU | ROCm GPU | Metal (Apple Silicon) | Intel GPU | Torch compile() | Bits | PEFT Fine Tuning | Serializable with 🤗Transformers | 🤗Transformers Support | Link to library |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [AQLM](./aqlm.md) | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/2 | 🟢 | 🟢 | 🟢 | https://github.com/Vahe1994/AQLM |
| [AWQ](./awq.md) | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | ? | 4 | 🟢 | 🟢 | 🟢 | https://github.com/casper-hansen/AutoAWQ |
| [bitsandbytes](./bitsandbytes.md) | 🟢 | 🟡 <sub>1</sub> | 🟢 | 🟡 <sub>1</sub> | 🔴 <sub>2</sub> | 🟡 <sub>1</sub> | 🔴 <sub>1</sub> | 4/8 | 🟢 | 🟢 | 🟢 | https://github.com/bitsandbytes-foundation/bitsandbytes |
| [compressed-tensors](./compressed_tensors.md) | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 1/8 | 🟢 | 🟢 | 🟢 | https://github.com/neuralmagic/compressed-tensors |
| [EETQ](./eetq.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
| [GGUF / GGML (llama.cpp)](../gguf.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 1/8 | 🔴 | [See Notes](../gguf.md) | [See Notes](../gguf.md) | https://github.com/ggerganov/llama.cpp |
| [GPTQModel](./gptq.md) | 🔴 | 🟢 <sub>3</sub> | 🟢 | 🟢 | 🟢 | 🟢 <sub>4</sub> | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/ModelCloud/GPTQModel |
| [AutoGPTQ](./gptq.md) | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/AutoGPTQ/AutoGPTQ |
| [HIGGS](./higgs.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 2/4 | 🔴 | 🟢 | 🟢 | https://github.com/HanGuo97/flute |
| [HQQ](./hqq.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/8 | 🟢 | 🔴 | 🟢 | https://github.com/mobiusml/hqq/ |
| [optimum-quanto](./quanto.md) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🟢 | 2/4/8 | 🔴 | 🔴 | 🟢 | https://github.com/huggingface/optimum-quanto |
| [FBGEMM_FP8](./fbgemm_fp8.md) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | https://github.com/pytorch/FBGEMM |
| [torchao](./torchao.md) | 🟢 | | 🟢 | 🔴 | 🟡 <sub>5</sub> | 🔴 | | 4/8 | | 🟢🔴 | 🟢 | https://github.com/pytorch/ao |
| [VPTQ](./vptq.md) | 🔴 | 🔴 | 🟢 | 🟡 | 🔴 | 🔴 | 🟢 | 1/8 | 🔴 | 🟢 | 🟢 | https://github.com/microsoft/VPTQ |

<Tip>

**1:** bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1.
For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend). Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.

</Tip>

<Tip>

**2:** bitsandbytes is seeking contributors to help develop and lead the Apple Silicon backend. Interested? Contact them directly via their repo. Stipends may be available through sponsorships.

</Tip>

<Tip>

**3:** GPTQModel[CPU] supports 4-bit via IPEX on Intel/AMD and the full bit range via Torch on Intel/AMD/Apple Silicon.

</Tip>

<Tip>

**4:** GPTQModel[Intel GPU] via IPEX only supports 4-bit for Intel Datacenter Max/Arc GPUs.

</Tip>

<Tip>

**5:** torchao only supports int4 weights on Metal (Apple Silicon).

</Tip>
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/overview.md
https://huggingface.co/docs/transformers/en/quantization/overview/#when-to-use-what
#when-to-use-what
.md
438_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/
.md
439_0
With the FBGEMM FP8 quantization method, you can quantize your model in FP8 (W8A8):

- the weights are quantized in 8-bit (FP8) per channel
- the activations are quantized in 8-bit (FP8) per token

It relies on the [FBGEMM](https://github.com/pytorch/FBGEMM) library, which provides efficient low-precision general matrix multiplication for small batch sizes and support for accuracy-loss minimizing techniques such as row-wise quantization and outlier-aware quantization.

> [!TIP]
> You need a GPU with compute capability >= 9.0 (e.g. H100)

Before you begin, make sure the following libraries are installed with their latest version:

```bash
pip install --upgrade accelerate fbgemm-gpu torch
```

If you are having issues with the fbgemm-gpu and torch libraries, you might need to install the nightly release. You can follow the instructions [here](https://pytorch.org/FBGEMM/fbgemm_gpu-development/InstallationInstructions.html#fbgemm-gpu-install-libraries:~:text=found%20here.-,Install%20the%20FBGEMM_GPU%20Package,-Install%20through%20PyTorch).

By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file to automatically load the most memory-optimal data type.

```py
from transformers import FbgemmFp8Config, AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"
quantization_config = FbgemmFp8Config()
quantized_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto", quantization_config=quantization_config)

tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A quantized model can be saved with `save_pretrained` and reloaded with `from_pretrained`.

```py
quant_path = "/path/to/save/quantized/model"
quantized_model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/fbgemm_fp8.md
https://huggingface.co/docs/transformers/en/quantization/fbgemm_fp8/#fbgemm-fp8
#fbgemm-fp8
.md
439_1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/optimum.md
https://huggingface.co/docs/transformers/en/quantization/optimum/
.md
440_0