Columns: text, source, url, source_section, file_type, id
The [Optimum](https://huggingface.co/docs/optimum/index) library supports quantization for Intel, Furiosa, ONNX Runtime, GPTQ, and lower-level PyTorch quantization functions. Consider using Optimum for quantization if you're targeting specific, optimized hardware like Intel CPUs or Furiosa NPUs, or an accelerated runtime like ONNX Runtime.
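For a concrete picture of an Optimum workflow, here is a minimal sketch of dynamic INT8 quantization through the ONNX Runtime backend. It assumes `optimum[onnxruntime]` is installed; the checkpoint and save directory are examples, and the exact configuration helpers may differ between Optimum versions.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the PyTorch checkpoint to ONNX so ONNX Runtime can run it
ort_model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)

# Dynamic (weight-only) INT8 quantization targeting AVX512-VNNI CPUs
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Quantize the ONNX graph and save the result
quantizer = ORTQuantizer.from_pretrained(ort_model)
quantizer.quantize(save_dir="distilbert_sst2_int8_onnx", quantization_config=qconfig)
```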
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/optimum.md
https://huggingface.co/docs/transformers/en/quantization/optimum/#optimum
#optimum
.md
440_1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/
.md
441_0
<Tip> Try optimum-quanto + transformers with this [notebook](https://colab.research.google.com/drive/16CXfVmtdQvciSh9BopZUDYcmXCDpvgrT?usp=sharing)! </Tip> The [🤗 optimum-quanto](https://github.com/huggingface/optimum-quanto) library is a versatile PyTorch quantization toolkit. The quantization method used is linear quantization. Quanto provides several unique features such as: - weight quantization (`float8`, `int8`, `int4`, `int2`) - activation quantization (`float8`, `int8`) - modality agnostic (e.g. CV, LLM) - device agnostic (e.g. CUDA, XPU, MPS, CPU) - compatibility with `torch.compile` - easy to add custom kernels for specific devices - support for quantization-aware training <!-- Add link to the blogpost --> Before you begin, make sure the following libraries are installed: ```bash pip install optimum-quanto accelerate transformers ``` Now you can quantize a model by passing a [`QuantoConfig`] object to the [`~PreTrainedModel.from_pretrained`] method. This works for any model in any modality, as long as it contains `torch.nn.Linear` layers. The integration with Transformers only supports weight quantization. For more complex use cases such as activation quantization, calibration, and quantization-aware training, you should use the [optimum-quanto](https://github.com/huggingface/optimum-quanto) library directly. By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in, such as torch.float16. Set `torch_dtype="auto"` to load the weights in the data type defined in a model's `config.json` file and automatically pick the most memory-optimal data type. ```py from transformers import AutoModelForCausalLM, AutoTokenizer, QuantoConfig model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) quantization_config = QuantoConfig(weights="int8") quantized_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="cuda:0", quantization_config=quantization_config) ``` Note that serialization is not supported yet with Transformers, but it is coming soon! If you want to save the model, you can use the quanto library instead. Optimum-quanto uses a linear quantization algorithm. Even though this is a basic quantization technique, it achieves very good results! Have a look at the following benchmark (llama-2-7b, perplexity metric); you can find more benchmarks [here](https://github.com/huggingface/optimum-quanto/tree/main/bench/generation). <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/NousResearch-Llama-2-7b-hf_Perplexity.png" alt="llama-2-7b-quanto-perplexity" /> </div> </div> The library is versatile enough to be compatible with most PTQ optimization algorithms. The plan is to integrate the most popular algorithms (AWQ, SmoothQuant) as seamlessly as possible in the future.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/quanto.md
https://huggingface.co/docs/transformers/en/quantization/quanto/#optimum-quanto
#optimum-quanto
.md
441_1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/
.md
442_0
Transformers supports and integrates many quantization methods such as QLoRA, GPTQ, LLM.int8, and AWQ. However, there are other quantization approaches that are not yet integrated. To make adding and using these quantization methods with Transformers models easier, you should use the [`HfQuantizer`] class. The [`HfQuantizer`] is designed as an internal helper class for adding a quantization method instead of something you apply to every PyTorch module. This guide will show you how to integrate a new quantization method with the [`HfQuantizer`] class.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#contribute-new-quantization-method
#contribute-new-quantization-method
.md
442_1
Before integrating a new quantization method into Transformers, ensure the method you are trying to add meets the following prerequisites. Only quantization methods that can be run with PyTorch modules are currently supported. - The quantization method is available through a Python package that is pip-installable by anyone (it is also fine if you can only install the package from source). Ideally, pre-compiled kernels are included in the pip package. - The method can run on commonly-used hardware (CPU, GPU, ...). - The method is wrapped in an `nn.Module` (e.g., `Linear8bitLt`, `Linear4bit`), and the quantized linear layer should have the following definition: ```py class Linear4bit(nn.Module): def __init__(self, ...): ... def forward(self, x): return my_4bit_kernel(x, self.weight, self.bias) ``` This way, Transformers models can be easily quantized by replacing some instances of `nn.Linear` with a target class (see the sketch below). - The quantization method should be serializable. You can save the quantized weights locally or push them to the Hub. - Make sure the package that contains the quantization kernels/primitives is stable (no frequent breaking changes). Some quantization methods may require "pre-quantizing" the models through data calibration (e.g., AWQ). In this case, we prefer to only support inference in Transformers and let the third-party library maintained by the ML community deal with the model quantization itself.
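To make the module-swap idea concrete, here is a minimal, hypothetical sketch of replacing `nn.Linear` instances with a quantized target class. `Linear4bit` is the placeholder class from the snippet above, not a real package API; a real quantizer would also carry over quantization parameters and handle weight loading on the `"meta"` device.

```python
import torch.nn as nn

def replace_linear_with_target(module: nn.Module, target_cls, ignore=("lm_head",)):
    """Recursively swap nn.Linear children for a quantized target class (illustrative only)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and name not in ignore:
            # Create the quantized replacement with the same shape; real weights are
            # loaded later into the swapped-in module.
            quantized = target_cls(child.in_features, child.out_features, bias=child.bias is not None)
            setattr(module, name, quantized)
        else:
            replace_linear_with_target(child, target_cls, ignore)
    return module

# Usage sketch: model = replace_linear_with_target(model, Linear4bit)
```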
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#requirements
#requirements
.md
442_2
1. Create a new quantization config class inside [src/transformers/utils/quantization_config.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/utils/quantization_config.py) and make sure to expose the new quantization config inside Transformers' main `init` by adding it to the [`_import_structure`](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/__init__.py#L1088) object of [src/transformers/__init__.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/__init__.py). 2. Create a new file inside [src/transformers/quantizers/](https://github.com/huggingface/transformers/tree/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers) named `quantizer_your_method.py`, and make it inherit from [src/transformers/quantizers/base.py::HfQuantizer](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/base.py#L28). Make sure to add the new quantizer and quantization config to the quantization auto-mapping in [src/transformers/quantizers/auto.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/auto.py). 3. Define the following class attributes/property methods for your quantization method: * `requires_calibration`: Whether the quantization method requires a data calibration process. If set to `True`, you can only support inference with pre-quantized weights, not on-the-fly quantization. * `required_packages`: A list of strings of the required packages to use the quantized weights. You might need to define some new utility methods such as `is_auto_awq_available` in [transformers/src/utils/import_utils.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/utils/import_utils.py). * `requires_parameters_quantization`: Only required if your quantization method requires extra attention to the underlying `nn.Parameter` object. For example, bitsandbytes uses `Params4bit` and `Int8Param`, which require some extra attention when quantizing the model. Most recent quantization methods pack int2/int4 weights inside `torch.uint8` weights, so this flag should rarely be needed (it is set to `False` by default). * `is_serializable`: A property method to determine whether the method is serializable or not. * `is_trainable`: A property method to determine whether you can fine-tune models on top of the quantization method (with or without PEFT approaches). 4. Write the `validate_environment` and `update_torch_dtype` methods. These methods are called before creating the quantized model to ensure users use the right configuration. You can have a look at how this is done in other quantizers. 5. Write the `_process_model_before_weight_loading` method. In Transformers, quantized models are initialized first on the `"meta"` device before loading the weights. This means the `_process_model_before_weight_loading` method takes care of manipulating the model skeleton to replace some modules (e.g., `nn.Linear`) with the target modules (quantization modules).
You can define the module replacement logic or any other utility method by creating a new file in [transformers/src/integrations/](https://github.com/huggingface/transformers/tree/abbffc4525566a48a9733639797c812301218b83/src/transformers/integrations) and exposing the relevant methods in that folder's `__init__.py` file. The best starting point would be to have a look at other quantization methods, such as [quantizer_awq.py](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/quantizers/quantizer_awq.py). 6. Write the `_process_model_after_weight_loading` method. This method enables implementing additional features that require manipulating the model after loading the weights. 7. Document everything! Make sure your quantization method is documented by adding a new file under `docs/source/en/quantization` and adding a new row to the table in `docs/source/en/quantization/overview.md`. 8. Add tests! You should add tests by first adding the package to our nightly Dockerfile inside `docker/transformers-quantization-latest-gpu` and then adding a new test file in `tests/quantization/xxx`. Feel free to check out how it is implemented for other quantization methods. A minimal sketch of such a quantizer class follows.
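Putting the steps above together, a new quantizer file might look roughly like the following. This is a hedged sketch rather than the actual base-class contract: the attribute and method names mirror the list above, but the exact signatures (for example, the keyword arguments passed during `from_pretrained`) should be copied from an existing quantizer such as `quantizer_awq.py`. The package name `my_quant_lib` is hypothetical.

```python
# Hypothetical src/transformers/quantizers/quantizer_my_method.py — a sketch only.
from transformers.quantizers.base import HfQuantizer


class MyMethodHfQuantizer(HfQuantizer):
    requires_calibration = False          # weights can be quantized at load time
    required_packages = ["my_quant_lib"]  # hypothetical pip package providing the kernels
    requires_parameters_quantization = False

    def validate_environment(self, *args, **kwargs):
        # Raise early if the backing package or a suitable device is missing.
        ...

    def update_torch_dtype(self, torch_dtype):
        # Adjust/force the dtype expected by the kernels, e.g. float16.
        return torch_dtype

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Swap nn.Linear modules for the method's quantized linear class while the
        # model skeleton still lives on the "meta" device.
        ...

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Optional post-processing once the real weights are in place.
        return model

    @property
    def is_serializable(self):
        return True

    @property
    def is_trainable(self):
        return False
```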
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/contribute.md
https://huggingface.co/docs/transformers/en/quantization/contribute/#build-a-new-hfquantizer-class
#build-a-new-hfquantizer-class
.md
442_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/
.md
443_0
Half-Quadratic Quantization (HQQ) implements on-the-fly quantization via fast, robust optimization. It doesn't require calibration data and can be used to quantize any model. Please refer to the <a href="https://github.com/mobiusml/hqq/">official package</a> for more details. For installation, we recommend the following approach to get the latest version and build its corresponding CUDA kernels: ```bash pip install hqq ``` To quantize a model, you need to create an [`HqqConfig`]. There are two ways of doing it: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig # Method 1: all linear layers will use the same quantization config quant_config = HqqConfig(nbits=8, group_size=64) ``` ```python # Method 2: each linear layer with the same tag will use a dedicated quantization config q4_config = {'nbits':4, 'group_size':64} q3_config = {'nbits':3, 'group_size':32} quant_config = HqqConfig(dynamic_config={ 'self_attn.q_proj':q4_config, 'self_attn.k_proj':q4_config, 'self_attn.v_proj':q4_config, 'self_attn.o_proj':q4_config, 'mlp.gate_proj':q3_config, 'mlp.up_proj' :q3_config, 'mlp.down_proj':q3_config, }) ``` The second approach is especially interesting for quantizing Mixture-of-Experts (MoEs) because the experts are less affected by lower quantization settings. Then you simply quantize the model as follows: ```python import torch model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="cuda", quantization_config=quant_config ) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#hqq
#hqq
.md
443_1
HQQ supports various backends, including pure PyTorch and custom dequantization CUDA kernels. These backends are suitable for older GPUs and PEFT/QLoRA training. For faster inference, HQQ supports 4-bit fused kernels (TorchAO and Marlin), reaching up to 200 tokens/sec on a single RTX 4090. For more details on how to use the backends, please refer to https://github.com/mobiusml/hqq/?tab=readme-ov-file#backend
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/hqq.md
https://huggingface.co/docs/transformers/en/quantization/hqq/#optimized-runtime
#optimized-runtime
.md
443_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/
.md
444_0
The [`compressed-tensors`](https://github.com/neuralmagic/compressed-tensors) library provides a versatile and efficient way to store and manage compressed model checkpoints. This library supports various quantization and sparsity schemes, making it a unified format for handling different model optimizations like GPTQ, AWQ, SmoothQuant, INT8, FP8, SparseGPT, and more. Some of the supported formats include: 1. `dense` 2. `int-quantized` ([sample](https://huggingface.co/nm-testing/tinyllama-w8a8-compressed-hf-quantizer)): INT8 quantized models 3. `float-quantized` ([sample](https://huggingface.co/nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat)): FP8 quantized models; currently supports E4M3 4. `pack-quantized` ([sample](https://huggingface.co/nm-testing/tinyllama-w4a16-compressed-hf-quantizer)): INT4 or INT8 weight-quantized models, packed into INT32. For INT4, the weights have an INT4 range but are stored as INT8 and then packed into INT32. Compressed models can be easily created using [llm-compressor](https://github.com/vllm-project/llm-compressor); see the sketch below. Alternatively, models can be created independently and serialized with a compressed-tensors config. To find existing models on the Hugging Face Model Hub, search for the [`compressed-tensors` tag](https://huggingface.co/models?other=compressed-tensors).
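As a rough, hedged illustration of the llm-compressor route — the recipe below follows llm-compressor's documented `oneshot` workflow, but the exact module paths, scheme names, and arguments are assumptions that may differ between versions, and the model name is only an example:

```python
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Dynamic FP8 quantization of all Linear layers except the LM head
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",     # example checkpoint to compress
    recipe=recipe,
    output_dir="tinyllama-fp8-compressed-tensors",  # saved in compressed-tensors format
)
```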
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#compressed-tensors
#compressed-tensors
.md
444_1
- Weight and activation precisions: FP8, INT4, INT8 (for Q/DQ arbitrary precision is allowed for INT) - Quantization scales and zero-points strategies: [tensor, channel, group, block, token](https://github.com/neuralmagic/compressed-tensors/blob/83b2e7a969d70606421a76b9a3d112646077c8de/src/compressed_tensors/quantization/quant_args.py#L43-L52) - Dynamic per-token activation quantization (or any static strategy) - Sparsity in weights (unstructured or semi-structured like 2:4) can be composed with quantization for extreme compression - Supports quantization of arbitrary modules, not just Linear modules - Targeted support or ignoring of modules by name or class
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#features
#features
.md
444_2
It is recommended to install stable releases of compressed-tensors from [PyPI](https://pypi.org/project/compressed-tensors): ```bash pip install compressed-tensors ``` Developers who want to experiment with the latest features can also install the package from source: ```bash git clone https://github.com/neuralmagic/compressed-tensors cd compressed-tensors pip install -e . ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#installation
#installation
.md
444_3
Quantized models can be easily loaded for inference as shown below. Only models that have already been quantized can be loaded at the moment. To quantize a model into the compressed-tensors format see [llm-compressor](https://github.com/vllm-project/llm-compressor). ```python from transformers import AutoModelForCausalLM # Load the model in compressed-tensors format ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf") # Measure memory usage mem_params = sum([param.nelement()*param.element_size() for param in ct_model.parameters()]) print(f"{mem_params/2**30:.4f} GB") # 8.4575 GB ``` We can see from the output above that the compressed-tensors FP8 checkpoint of Llama 3.1 8B can be loaded for inference using about half of the memory of the unquantized reference checkpoint.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#quickstart-model-load
#quickstart-model-load
.md
444_4
```python from transformers import AutoModelForCausalLM, AutoTokenizer prompt = [ "Hello, my name is", "The capital of France is", "The future of AI is" ] model_name = "nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat" quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer(prompt, return_tensors="pt") generated_ids = quantized_model.generate(**inputs, max_length=50, do_sample=False) outputs = tokenizer.batch_decode(generated_ids) print(outputs) """ ['<|begin_of_text|>Hello, my name is [Name]. I am a [Your Profession/Student] and I am here to learn about the [Course/Program] at [University/Institution]. I am excited to be here and I am looking forward to', '<|begin_of_text|>The capital of France is Paris, which is located in the north-central part of the country. Paris is the most populous city in France and is known for its stunning architecture, art museums, fashion, and romantic atmosphere. The city is home to', "<|begin_of_text|>The future of AI is here, and it's already changing the way we live and work. From virtual assistants to self-driving cars, AI is transforming industries and revolutionizing the way we interact with technology. But what does the future of AI hold"] """ ``` The above shows a quick example for running generation using a `compressed-tensors` model. Currently, once loaded the model cannot be saved.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#sample-use-cases---load-and-run-an-fp8-model
#sample-use-cases---load-and-run-an-fp8-model
.md
444_5
In this example we will examine how the compressed-tensors model nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf is defined through its configuration entry and see how this translates to the loaded model representation.

First, let us look at the [`quantization_config` of the model](https://huggingface.co/nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf/blob/main/config.json). At a glance it looks overwhelming with the number of entries, but this is because compressed-tensors is a format that allows for flexible expression both during and after model compression. In practice, for checkpoint loading and inference, the configuration can be simplified to not include all the default or empty entries, so we will do that here to focus on what compression is actually represented.

```yaml
"quantization_config": {
  "config_groups": {
    "group_0": {
      "input_activations": {
        "num_bits": 8,
        "strategy": "tensor",
        "type": "float"
      },
      "targets": ["Linear"],
      "weights": {
        "num_bits": 8,
        "strategy": "tensor",
        "type": "float"
      }
    }
  },
  "format": "naive-quantized",
  "ignore": ["lm_head"],
  "quant_method": "compressed-tensors",
  "quantization_status": "frozen"
},
```

We can see from the above configuration that it is specifying one config group that includes weight and activation quantization to FP8 with a static per-tensor strategy. It is also worth noting that in the `ignore` list there is an entry to skip quantization of the `lm_head` module, so that module should be untouched in the checkpoint.

To see the result of the configuration in practice, we can simply use the [safetensors viewer](https://huggingface.co/nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf?show_file_info=model.safetensors.index.json) on the model card to see the quantized weights, input_scale, and weight_scale for all of the Linear modules in the first model layer (and so on for the rest of the layers).

| Tensors | Shape | Precision |
| ------- | ----- | --------- |
| model.layers.0.input_layernorm.weight | [4096] | BF16 |
| model.layers.0.mlp.down_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.down_proj.weight | [4096, 14336] | F8_E4M3 |
| model.layers.0.mlp.down_proj.weight_scale | [1] | BF16 |
| model.layers.0.mlp.gate_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.gate_proj.weight | [14336, 4096] | F8_E4M3 |
| model.layers.0.mlp.gate_proj.weight_scale | [1] | BF16 |
| model.layers.0.mlp.up_proj.input_scale | [1] | BF16 |
| model.layers.0.mlp.up_proj.weight | [14336, 4096] | F8_E4M3 |
| model.layers.0.mlp.up_proj.weight_scale | [1] | BF16 |
| model.layers.0.post_attention_layernorm.weight | [4096] | BF16 |
| model.layers.0.self_attn.k_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.k_proj.weight | [1024, 4096] | F8_E4M3 |
| model.layers.0.self_attn.k_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.o_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.o_proj.weight | [4096, 4096] | F8_E4M3 |
| model.layers.0.self_attn.o_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.q_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.q_proj.weight | [4096, 4096] | F8_E4M3 |
| model.layers.0.self_attn.q_proj.weight_scale | [1] | BF16 |
| model.layers.0.self_attn.v_proj.input_scale | [1] | BF16 |
| model.layers.0.self_attn.v_proj.weight | [1024, 4096] | F8_E4M3 |
| model.layers.0.self_attn.v_proj.weight_scale | [1] | BF16 |

When we load the model with the compressed-tensors HFQuantizer integration, we can see that all of the Linear modules that are specified within the quantization configuration have been replaced by `CompressedLinear` modules that manage the compressed weights and forward pass for inference.
Note that the `lm_head` mentioned before in the ignore list is still kept as an unquantized Linear module. ```python from transformers import AutoModelForCausalLM ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf") print(ct_model) """ LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 4096) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): CompressedLinear( in_features=4096, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (k_proj): CompressedLinear( in_features=4096, out_features=1024, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (v_proj): CompressedLinear( in_features=4096, out_features=1024, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (o_proj): CompressedLinear( in_features=4096, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): CompressedLinear( in_features=4096, out_features=14336, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (up_proj): CompressedLinear( in_features=4096, out_features=14336, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (down_proj): CompressedLinear( in_features=14336, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05) (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05) ) ) (norm): LlamaRMSNorm((4096,), eps=1e-05) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=4096, out_features=128256, bias=False) ) """ ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/compressed_tensors.md
https://huggingface.co/docs/transformers/en/quantization/compressed_tensors/#deep-dive-into-a-compressed-tensors-model-checkpoint
#deep-dive-into-a-compressed-tensors-model-checkpoint
.md
444_6
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/
.md
445_0
[BitNet](https://arxiv.org/abs/2402.17764) replaces traditional Linear layers in Multi-Head Attention and Feed-Forward Networks with specialized layers called BitLinear with ternary (or binary in the older version) precision. The BitLinear layers introduced here quantize the weights using ternary precision (with values of -1, 0, and 1) and quantize the activations to 8-bit precision. <figure style="text-align: center;"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/1.58llm_extreme_quantization/bitlinear.png" alt="Alt Text" /> <figcaption>The architecture of BitNet with BitLinear layers</figcaption> </figure> During training, we start by quantizing the weights into ternary values, using symmetric per tensor quantization. First, we compute the average of the absolute values of the weight matrix and use this as a scale. We then divide the weights by the scale, round the values, constrain them between -1 and 1, and finally rescale them to continue in full precision. $$ scale_w = \frac{1}{\frac{1}{nm} \sum_{ij} |W_{ij}|} $$ $$ W_q = \text{clamp}_{[-1,1]}(\text{round}(W*scale)) $$ $$ W_{dequantized} = W_q*scale_w $$ Activations are then quantized to a specified bit-width (e.g., 8-bit) using [absmax](https://arxiv.org/pdf/2208.07339) quantization (symmetric per channel quantization). This involves scaling the activations into a range [−128,127[. The quantization formula is: $$ scale_x = \frac{127}{|X|_{\text{max}, \, \text{dim}=-1}} $$ $$ X_q = \text{clamp}_{[-128,127]}(\text{round}(X*scale)) $$ $$ X_{dequantized} = X_q * scale_x $$ To learn more about how we trained, and fine-tuned bitnet models checkout the blogpost [here](https://huggingface.co/blog/1_58_llm_extreme_quantization)
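As a quick illustration of the weight formula above, here is a minimal PyTorch sketch of the ternary round-and-clip step. It is illustrative only and is not the BitLinear implementation used in Transformers; the epsilon guard is my own addition to avoid division by zero.

```python
import torch

def ternary_quantize_weights(weight: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Symmetric per-tensor ternary quantization of a weight matrix (sketch only)."""
    # scale_w = 1 / mean(|W|)
    scale_w = 1.0 / weight.abs().mean().clamp(min=eps)
    # W_q = clamp(round(W * scale_w), -1, 1) -> every entry ends up in {-1, 0, 1}
    return (weight * scale_w).round().clamp_(-1, 1)

# Example: quantize a random projection matrix
w = torch.randn(128, 256)
w_q = ternary_quantize_weights(w)
print(w_q.unique())  # tensor([-1., 0., 1.])
```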
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#bitnet
#bitnet
.md
445_1
BitNet models can't be quantized on the fly. They need to be pre-trained or fine-tuned with the quantization applied (it is a quantization-aware training technique). Once trained, these models are already quantized and available as packed versions on the Hub. A quantized model can be loaded as follows: ```py from transformers import AutoModelForCausalLM path = "/path/to/model" model = AutoModelForCausalLM.from_pretrained(path, device_map="auto") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#load-a-bitnet-model-from-the-hub
#load-a-bitnet-model-from-the-hub
.md
445_2
If you're looking to pre-train or fine-tune your own 1.58-bit model using Nanotron, check out this [PR](https://github.com/huggingface/nanotron/pull/180); everything you need to get started is there! For fine-tuning, you'll need to convert the model from Hugging Face format to Nanotron format (which has some differences). You can find the conversion steps in this [PR](https://github.com/huggingface/nanotron/pull/174).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#pre-training--fine-tuning-a-bitnet-model
#pre-training--fine-tuning-a-bitnet-model
.md
445_3
In our initial version, we chose to use `@torch.compile` to unpack the weights and perform the forward pass. It’s very straightforward to implement and delivers significant speed improvements. We plan to integrate additional optimized kernels in future versions.
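For intuition, here is a small, purely illustrative sketch of what such an unpacking step can look like when four 2-bit ternary codes are packed per `torch.uint8`. The packing layout is an assumption made for the example; it is not the actual kernel shipped in Transformers.

```python
import torch

@torch.compile
def unpack_ternary(packed: torch.Tensor) -> torch.Tensor:
    """Unpack uint8 tensors holding four 2-bit codes each into values in {-1, 0, 1}."""
    # Extract the four 2-bit fields of every byte: codes in {0, 1, 2}
    shifts = torch.tensor([0, 2, 4, 6], dtype=torch.uint8, device=packed.device)
    codes = (packed.unsqueeze(-1) >> shifts) & 0b11
    # Map code -> value: 0 -> -1, 1 -> 0, 2 -> 1
    return codes.to(torch.int8) - 1

packed = torch.tensor([70], dtype=torch.uint8)  # byte 0b01000110 encodes the codes [2, 1, 0, 1]
print(unpack_ternary(packed))                   # tensor([[ 1,  0, -1,  0]], dtype=torch.int8)
```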
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/bitnet.md
https://huggingface.co/docs/transformers/en/quantization/bitnet/#kernels
#kernels
.md
445_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/
.md
446_0
> [!TIP] > Try VPTQ on [Hugging Face](https://huggingface.co/spaces/microsoft/VPTQ)! > Try VPTQ on [Google Colab](https://colab.research.google.com/github/microsoft/VPTQ/blob/main/notebooks/vptq_example.ipynb)! > Know more about VPTQ on [ArXiv](https://arxiv.org/pdf/2409.17066)! Vector Post-Training Quantization ([VPTQ](https://github.com/microsoft/VPTQ)) is a novel post-training quantization method that leverages vector quantization to achieve high accuracy on LLMs at an extremely low bit-width (<2-bit). VPTQ can compress 70B and even 405B models to 1-2 bits without retraining while maintaining high accuracy. - Better accuracy at 1-2 bits (405B @ <2 bits, 70B @ 2 bits) - Lightweight quantization algorithm: it only takes ~17 hours to quantize the 405B Llama-3.1 model - Agile quantization inference: low decode overhead, best throughput, and TTFT Inference support for VPTQ is released in the `vptq` library. Make sure to install it to run the models: ```bash pip install vptq ``` The library provides efficient kernels for NVIDIA/AMD GPU inference. To run VPTQ models, simply load a model that has been quantized with VPTQ:
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#vptq
#vptq
.md
446_1
**Run Llama 3.1 70B on an RTX 4090 (24 GB @ ~2 bits) in real time** ![Llama3 1-70b-prompt](https://github.com/user-attachments/assets/d8729aca-4e1d-4fe1-ac71-c14da4bdd97f) ```python from transformers import AutoTokenizer, AutoModelForCausalLM quantized_model = AutoModelForCausalLM.from_pretrained( "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft") input_ids = tokenizer("hello, it's me", return_tensors="pt").to("cuda") out = quantized_model.generate(**input_ids, max_new_tokens=32, do_sample=False) print(tokenizer.decode(out[0], skip_special_tokens=True)) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#inference-example
#inference-example
.md
446_2
An early release of the VPTQ algorithm is available in the [VPTQ repository](https://github.com/microsoft/VPTQ/tree/algorithm); check out the [tutorial](https://github.com/microsoft/VPTQ/blob/algorithm/algorithm.md) to quantize your own model.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#quantize-your-own-model
#quantize-your-own-model
.md
446_3
VPTQ achieves better accuracy and higher throughput with lower quantization overhead across models of different sizes. The following experimental results are for reference only; VPTQ can achieve better outcomes under reasonable parameters, especially in terms of model accuracy and inference speed.

| Model | bitwidth | W2↓ | C4↓ | AvgQA↑ | tok/s↑ | mem(GB) | cost/h↓ |
| ----------- | -------- | ---- | ---- | ------ | ------ | ------- | ------- |
| LLaMA-2 7B | 2.02 | 6.13 | 8.07 | 58.2 | 39.9 | 2.28 | 2 |
| | 2.26 | 5.95 | 7.87 | 59.4 | 35.7 | 2.48 | 3.1 |
| LLaMA-2 13B | 2.02 | 5.32 | 7.15 | 62.4 | 26.9 | 4.03 | 3.2 |
| | 2.18 | 5.28 | 7.04 | 63.1 | 18.5 | 4.31 | 3.6 |
| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
| | 2.11 | 3.92 | 5.71 | 68.7 | 9.7 | 20.01 | 19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#early-results-from-tech-report
#early-results-from-tech-report
.md
446_4
⚠️ The repository only provides the model quantization algorithm. ⚠️ The open-source VPTQ-community provides models based on the technical report and quantization algorithm.

**Quick Estimation of Model Bitwidth (Excluding Codebook Overhead)**:

- **Model Naming Convention**: The model's name includes the **vector length** $v$, **codebook (lookup table) size**, and **residual codebook size**. For example, "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft" is "Meta-Llama-3.1-70B-Instruct" quantized with:
  - **Vector Length**: 8
  - **Number of Centroids**: 65536 (2^16)
  - **Number of Residual Centroids**: 256 (2^8)
- **Equivalent Bitwidth Calculation**:
  - **Index**: log2(65536) = 16 bits, divided by the vector length 8 → 2 bits per weight
  - **Residual Index**: log2(256) = 8 bits, divided by the vector length 8 → 1 bit per weight
  - **Total Bitwidth**: 2 + 1 = 3 bits per weight
- **Model Size Estimation**: 70B * 3 bits / 8 bits per byte = 26.25 GB
- **Note**: This estimate does not include the size of the codebook (lookup table), other parameter overheads, or the padding overhead for storing indices. For the detailed calculation method, please refer to **Tech Report Appendix C.2**. A small helper for this estimate follows the table.

| Model Series | Collections | (Estimated) Bit per weight |
| :---: | :---: | --- |
| Llama 3.1 Nemotron 70B Instruct HF | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-nemotron-70b-instruct-hf-without-finetune-671730b96f16208d0b3fe942) | [4 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-0-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-16384-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-1024-woft) [1.5 bits](https://huggingface.co/VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-256-woft) |
| Llama 3.1 8B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-8b-instruct-without-finetune-66f2b70b1d002ceedef02d2e) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-65536-woft) [3.5 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-4096-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft) [2.3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-8B-Instruct-v12-k65536-4096-woft) |
| Llama 3.1 70B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-70b-instruct-without-finetune-66f2bf454d3dd78dfee2ff11) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft) [2.25 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-4-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft) [1.93 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-32768-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft) [1.75 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k16384-0-woft) |
| Llama 3.1 405B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-llama-31-405b-instruct-without-finetune-66f4413f9ba55e1a9e52cfb0) | [4 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-256-woft) [2 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-65536-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k32768-32768-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-1024-woft) [1.5 bits (1)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k4096-0-woft) [1.5 bits (2)](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-256-woft) [1.43 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-128-woft) [1.375 bits](https://huggingface.co/VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-64-woft) |
| Mistral Large Instruct 2407 (123B) | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-mistral-large-instruct-2407-without-finetune-6711ebfb7faf85eed9cceb16) | [4 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-0-woft) [1.875 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-16384-woft) [1.75 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-4096-woft) [1.625 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft) [1.5 bits](https://huggingface.co/VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-256-woft) |
| Qwen 2.5 7B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-7b-instruct-without-finetune-66f3e9866d3167cc05ce954a) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k256-256-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-7B-Instruct-v16-k65536-65536-woft) |
| Qwen 2.5 14B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-14b-instruct-without-finetune-66f827f83c7ffa7931b8376c) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k256-256-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-14B-Instruct-v16-k65536-65536-woft) |
| Qwen 2.5 32B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-32b-instruct-without-finetune-66fe77173bf7d64139f0f613) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-256-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v16-k65536-65536-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-0-woft) [2 bits (3)](https://huggingface.co/VPTQ-community/Qwen2.5-32B-Instruct-v8-k256-256-woft) |
| Qwen 2.5 72B Instruct | [HF 🤗](https://huggingface.co/collections/VPTQ-community/vptq-qwen-25-72b-instruct-without-finetune-66f3bf1b3757dfa1ecb481c0) | [4 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-65536-woft) [3 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-256-woft) [2.38 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft) [2.25 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k512-512-woft) [2.25 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft) [2 bits (1)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-0-woft) [2 bits (2)](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-65536-woft) [1.94 bits](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-32768-woft) |
| Reproduced from the tech report | [HF 🤗](https://huggingface.co/collections/VPTQ-community/reproduced-vptq-tech-report-baseline-66fbf1dffe741cc9e93ecf04) | Results from the open source community for reference only, please use them responsibly. |
| Hessian and Inverse Hessian Matrix | [HF 🤗](https://huggingface.co/collections/VPTQ-community/hessian-and-invhessian-checkpoints-66fd249a104850d17b23fd8b) | Collected from RedPajama-Data-1T-Sample, following [Quip#](https://github.com/Cornell-RelaxML/quip-sharp/blob/main/quantize_llama/hessian_offline_llama.py) |
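To make the naming convention above easier to apply, here is a small illustrative helper (not part of the `vptq` package) that reproduces the bits-per-weight estimate for a `v{v}-k{centroids}-{residual}` model name:

```python
import math

def estimated_bits_per_weight(vector_length: int, num_centroids: int, num_res_centroids: int) -> float:
    """Estimate bits/weight from a VPTQ model name, excluding codebook and padding overhead."""
    index_bits = math.log2(num_centroids) / vector_length
    residual_bits = math.log2(num_res_centroids) / vector_length if num_res_centroids > 0 else 0.0
    return index_bits + residual_bits

# "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft": v=8, centroids=65536, residual centroids=256
bits = estimated_bits_per_weight(8, 65536, 256)
print(bits)                   # 3.0 bits per weight
print(70e9 * bits / 8 / 1e9)  # ~26.25 GB of packed indices
```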
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/vptq.md
https://huggingface.co/docs/transformers/en/quantization/vptq/#more-models-in-vptq-communityhttpshuggingfacecovptq-community
#more-models-in-vptq-communityhttpshuggingfacecovptq-community
.md
446_5
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/
.md
447_0
HIGGS is a 0-shot quantization algorithm that combines Hadamard preprocessing with MSE-Optimal quantization grids to achieve lower quantization error and SOTA performance. You can find more information in the paper [arxiv.org/abs/2411.17525](https://arxiv.org/abs/2411.17525). Runtime support for HIGGS is implemented through [FLUTE](https://arxiv.org/abs/2407.10960), and its [library](https://github.com/HanGuo97/flute).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#higgs
#higgs
.md
447_1
```python from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-9b-it", quantization_config=HiggsConfig(bits=4), device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") tokenizer.decode(model.generate( **tokenizer("Hi,", return_tensors="pt").to(model.device), do_sample=True, temperature=0.5, top_p=0.80, )[0]) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#quantization-example
#quantization-example
.md
447_2
Some pre-quantized models can be found in the [official collection](https://huggingface.co/collections/ISTA-DASLab/higgs-675308e432fd56b7f6dab94e) on Hugging Face Hub.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#pre-quantized-models
#pre-quantized-models
.md
447_3
**Architectures** Currently, FLUTE, and HIGGS by extension, **only support Llama 3.1 and 3.0 (8B, 70B and 405B), as well as Gemma-2 9B and 27B**. We're working on supporting a more diverse set of models, as well as arbitrary models, by modifying the FLUTE compilation procedure.

**torch.compile** HIGGS is fully compatible with `torch.compile`. When compiling `model.forward`, as described [here](../perf_torch_compile.md), these are the speedups it provides on an RTX 4090 for `Llama-3.1-8B-Instruct` (forward passes/sec):

| Batch Size | BF16 (With `torch.compile`) | HIGGS 4bit (No `torch.compile`) | HIGGS 4bit (With `torch.compile`) |
|------------|-----------------------------|----------------------------------|-----------------------------------|
| 1          | 59                          | 41                               | 124                               |
| 4          | 57                          | 42                               | 123                               |
| 16         | 56                          | 41                               | 120                               |

**Quantized training** Currently, HIGGS doesn't support quantized training (and backward passes in general). We're working on adding support for it.
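As a hedged illustration of the `torch.compile` point above (the compile call is a common pattern from the Transformers performance docs, not a HIGGS-specific requirement, and the compile mode is a suggestion):

```python
import torch
from transformers import AutoModelForCausalLM, HiggsConfig

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)

# Compile only the forward pass; the first few calls are slower while kernels compile.
model.forward = torch.compile(model.forward, mode="reduce-overhead")
```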
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/higgs.md
https://huggingface.co/docs/transformers/en/quantization/higgs/#current-limitations
#current-limitations
.md
447_4
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/
.md
448_0
> [!TIP] > Try AQLM on [Google Colab](https://colab.research.google.com/drive/1-xZmBRXT5Fm3Ghn4Mwa2KRypORXb855X?usp=sharing)! Additive Quantization of Language Models ([AQLM](https://arxiv.org/abs/2401.06118)) is a compression method for large language models. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes. Inference support for AQLM is provided by the `aqlm` library. Make sure to install it to run the models (note that aqlm only works with Python >= 3.10): ```bash pip install aqlm[gpu,cpu] ``` The library provides efficient kernels for both GPU and CPU inference and training. The instructions on how to quantize models yourself, as well as all the relevant code, can be found in the corresponding GitHub [repository](https://github.com/Vahe1994/AQLM). To run AQLM models, simply load a model that has been quantized with AQLM: ```python from transformers import AutoTokenizer, AutoModelForCausalLM quantized_model = AutoModelForCausalLM.from_pretrained( "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf") ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm
#aqlm
.md
448_1
Starting with version `aqlm 1.0.2`, AQLM supports Parameter-Efficient Fine-Tuning in the form of [LoRA](https://huggingface.co/docs/peft/package_reference/lora) integrated into the [PEFT](https://huggingface.co/blog/peft) library.
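For example, a LoRA adapter can be attached to the quantized model with PEFT. The hyperparameters and target module names below are assumptions for illustration; pick the projection layers that actually exist in your model:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

quantized_model = AutoModelForCausalLM.from_pretrained(
    "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto"
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections; adjust per model
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(quantized_model, lora_config)
peft_model.print_trainable_parameters()
```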
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#peft
#peft
.md
448_2
AQLM quantization setups vary mainly in the number of codebooks used as well as the codebook sizes in bits. The most popular setups, as well as the inference kernels they support, are:

| Kernel | Number of codebooks | Codebook size, bits | Notation | Accuracy | Speedup | Fast GPU inference | Fast CPU inference |
|---|---------------------|---------------------|----------|-------------|-------------|--------------------|--------------------|
| Triton | K | N | KxN | - | Up to ~0.7x | ✅ | ❌ |
| CUDA | 1 | 16 | 1x16 | Best | Up to ~1.3x | ✅ | ❌ |
| CUDA | 2 | 8 | 2x8 | OK | Up to ~3.0x | ✅ | ❌ |
| Numba | K | 8 | Kx8 | Good | Up to ~4.0x | ❌ | ✅ |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/quantization/aqlm.md
https://huggingface.co/docs/transformers/en/quantization/aqlm/#aqlm-configurations
#aqlm-configurations
.md
448_3
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/
.md
449_0
<Tip warning={true}> Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. </Tip> To learn more about agents and tools make sure to read the [introductory guide](../transformers_agents). This page contains the API docs for the underlying classes.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agents--tools
#agents--tools
.md
449_1
We provide two types of agents, based on the main [`Agent`] class: - [`CodeAgent`] acts in one shot, generating code to solve the task, then executes it at once. - [`ReactAgent`] acts step by step, each step consisting of one thought, then one tool call and execution. It has two classes: - [`ReactJsonAgent`] writes its tool calls in JSON. - [`ReactCodeAgent`] writes its tool calls in Python code.
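As a hedged example of how these classes are typically wired together (the engine class and task string are illustrative; check the introductory guide for the exact API of your Transformers version):

```python
from transformers.agents import HfApiEngine, ReactCodeAgent

# An LLM engine backed by the Hugging Face Inference API (uses its default model here).
llm_engine = HfApiEngine()

# A ReAct-style agent that writes its tool calls as Python code.
agent = ReactCodeAgent(tools=[], llm_engine=llm_engine)

agent.run("How many seconds are there in a leap year?")
```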
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agents
#agents
.md
449_2
Agent
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agent
#agent
.md
449_3
A class for an agent that solves the given task using a single block of code. It plans all its actions, then executes all in one shot.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#codeagent
#codeagent
.md
449_4
This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The action will be parsed from the LLM output: it consists of calls to tools from the toolbox, with arguments chosen by the LLM engine.

This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in JSON format, then parsed and executed.

This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in code format, then parsed and executed.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#react-agents
#react-agents
.md
449_5
ManagedAgent
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#managedagent
#managedagent
.md
449_6
Main function to quickly load a tool, be it on the Hub or in the Transformers library. <Tip warning={true}> Loading a tool means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt. </Tip> Args: task_or_repo_id (`str`): The task for which to load the tool or a repo ID of a tool on the Hub. Tasks implemented in Transformers are: - `"document_question_answering"` - `"image_question_answering"` - `"speech_to_text"` - `"text_to_speech"` - `"translation"` model_repo_id (`str`, *optional*): Use this argument to use a different model than the default one for the tool you selected. token (`str`, *optional*): The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). kwargs (additional keyword arguments, *optional*): Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init.
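For example, loading one of the built-in tasks listed above might look like this (a hedged sketch; the exact call signature of the returned tool depends on the task):

```python
from transformers import load_tool

# Load the built-in text-to-speech tool by its task name.
tts_tool = load_tool("text_to_speech")

# Tools are callable; the expected arguments depend on the specific tool.
audio = tts_tool("Hello, this text will be read out loud.")
```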
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#loadtool
#loadtool
.md
449_7
Converts a function into an instance of a Tool subclass. Args: tool_function: Your function. Should have type hints for each input and a type hint for the output. Should also have a docstring description including an 'Args:' part where each argument is described.
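A minimal sketch of a function that satisfies these requirements (type hints on every input and on the output, plus an `Args:` section in the docstring), assuming `tool` is importable from the top-level `transformers` namespace:

```python
from transformers import tool

@tool
def character_count(text: str) -> int:
    """
    Returns the number of characters in a piece of text.

    Args:
        text: The text whose characters should be counted.
    """
    return len(text)
```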
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#tool
#tool
.md
449_8
A base class for the functions used by the agent. Subclass this and implement the `__call__` method as well as the following class attributes:

- **description** (`str`) -- A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance 'This is a tool that downloads a file from a `url`. It takes the `url` as input, and returns the text contained in the file'.
- **name** (`str`) -- A performative name that will be used for your tool in the prompt to the agent. For instance `"text-classifier"` or `"image_generator"`.
- **inputs** (`Dict[str, Dict[str, Union[str, type]]]`) -- The dict of modalities expected for the inputs. It has one `type` key and one `description` key. This is used by `launch_gradio_demo` or to make a nice space from your tool, and can also be used in the generated description for your tool.
- **output_type** (`type`) -- The type of the tool output. This is used by `launch_gradio_demo` or to make a nice space from your tool, and can also be used in the generated description for your tool.

You can also override the method [`~Tool.setup`] if your tool has an expensive operation to perform before being usable (such as loading a model). [`~Tool.setup`] will be called the first time you use your tool, but not at instantiation.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#tool
#tool
.md
449_9
Toolbox
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#toolbox
#toolbox
.md
449_10
A [`Tool`] tailored towards Transformer models. On top of the class attributes of the base class [`Tool`], you will need to specify:

- **model_class** (`type`) -- The class to use to load the model in this tool.
- **default_checkpoint** (`str`) -- The default checkpoint that should be used when the user doesn't specify one.
- **pre_processor_class** (`type`, *optional*, defaults to [`AutoProcessor`]) -- The class to use to load the pre-processor.
- **post_processor_class** (`type`, *optional*, defaults to [`AutoProcessor`]) -- The class to use to load the post-processor (when different from the pre-processor).

Args:
    model (`str` or [`PreTrainedModel`], *optional*): The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute `default_checkpoint`.
    pre_processor (`str` or `Any`, *optional*): The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of `model` if unset.
    post_processor (`str` or `Any`, *optional*): The name of the checkpoint to use for the post-processor, or the instantiated post-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the `pre_processor` if unset.
    device (`int`, `str` or `torch.device`, *optional*): The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc...), the CPU otherwise.
    device_map (`str` or `dict`, *optional*): If passed along, will be used to instantiate the model.
    model_kwargs (`dict`, *optional*): Any keyword argument to send to the model instantiation.
    token (`str`, *optional*): The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
    hub_kwargs (additional keyword arguments, *optional*): Any additional keyword argument to send to the methods that will load the data from the Hub.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#pipelinetool
#pipelinetool
.md
449_11
Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes `inputs` and `output_type`. Args: tool_class (`type`): The class of the tool for which to launch the demo.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#launchgradiodemo
#launchgradiodemo
.md
449_12
Runs an agent with the given task and streams the messages from the agent as gradio ChatMessages.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#streamtogradio
#streamtogradio
.md
449_13
ToolCollection
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#toolcollection
#toolcollection
.md
449_14
You're free to create and use your own engines with the Agents framework. These engines have the following specification:
1. Follow the [messages format](../chat_templating.md) for their input (`List[Dict[str, str]]`) and return a string.
2. Stop generating outputs *before* the sequences passed in the `stop_sequences` argument, as in the sketch below.
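A minimal sketch of a custom engine that follows this specification (the wrapped `generate_fn` is a stand-in for whatever backend you actually call):

```python
from typing import Dict, List, Optional


class MyCustomEngine:
    """Any callable with this signature can be plugged into an agent as its llm_engine."""

    def __init__(self, generate_fn):
        # generate_fn is any function mapping a prompt string to a completion string.
        self.generate_fn = generate_fn

    def __call__(self, messages: List[Dict[str, str]], stop_sequences: Optional[List[str]] = None) -> str:
        # Flatten the chat messages into a single prompt (a real engine would apply a chat template).
        prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
        output = self.generate_fn(prompt)
        # Truncate the output *before* any stop sequence, as required by the spec.
        for stop in stop_sequences or []:
            if stop in output:
                output = output.split(stop)[0]
        return output
```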
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#engines
#engines
.md
449_15
For convenience, we have added a `TransformersEngine` that implements the points above, taking a pre-initialized `Pipeline` as input. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine >>> model_name = "HuggingFaceTB/SmolLM-135M-Instruct" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) >>> engine = TransformersEngine(pipe) >>> engine([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]) "What a " ``` This engine uses a pre-initialized local text-generation pipeline.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#transformersengine
#transformersengine
.md
449_16
The `HfApiEngine` is an engine that wraps an [HF Inference API](https://huggingface.co/docs/api-inference/index) client for the execution of the LLM. ```python >>> from transformers import HfApiEngine >>> messages = [ ... {"role": "user", "content": "Hello, how are you?"}, ... {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, ... {"role": "user", "content": "No need to help, take it easy."}, ... ] >>> HfApiEngine()(messages, stop_sequences=["conversation"]) "That's very kind of you to say! It's always nice to have a relaxed " ``` A class to interact with Hugging Face's Inference API for language model interaction. This engine allows you to communicate with Hugging Face's models using the Inference API. It can be used in both serverless mode or with a dedicated endpoint, supporting features like stop sequences and grammar customization. Parameters: model (`str`, *optional*, defaults to `"meta-llama/Meta-Llama-3.1-8B-Instruct"`): The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub. token (`str`, *optional*): Token used by the Hugging Face API for authentication. If not provided, the class will use the token stored in the Hugging Face CLI configuration. max_tokens (`int`, *optional*, defaults to 1500): The maximum number of tokens allowed in the output. timeout (`int`, *optional*, defaults to 120): Timeout for the API request, in seconds. Raises: ValueError: If the model name is not provided.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#hfapiengine
#hfapiengine
.md
449_17
Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`. These types have three specific purposes: - Calling `to_raw` on the type should return the underlying object - Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText` but will be the path of the serialized version of the object in other instances - Displaying it in an ipython kernel should display the object correctly
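As a small illustration of these three behaviors (the import path is an assumption; agent types may live under a different module in your version of Transformers):

```python
# NOTE: the import path below is an assumption for illustration purposes.
from transformers.agents.agent_types import AgentText

result = AgentText("The answer is 42.")

print(isinstance(result, str))  # True -- it still behaves like a string
print(result.to_raw())          # the underlying object
print(result.to_string())       # the string form (for AgentText, the text itself)
```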
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agent-types
#agent-types
.md
449_18
Text type returned by the agent. Behaves as a string.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agenttext
#agenttext
.md
449_19
Image type returned by the agent. Behaves as a PIL.Image.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agentimage
#agentimage
.md
449_20
Audio type returned by the agent.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/agent.md
https://huggingface.co/docs/transformers/en/main_classes/agent/#agentaudio
#agentaudio
.md
449_21
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/
.md
450_0
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to generate Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/#feature-extractor
#feature-extractor
.md
450_1
feature_extraction_utils.FeatureExtractionMixin This is a feature extraction mixin used to provide saving/loading functionality for sequential and image feature extractors. - from_pretrained - save_pretrained
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/#featureextractionmixin
#featureextractionmixin
.md
450_2
This is a general feature extraction class for speech recognition.

Args:
    feature_size (`int`): The feature dimension of the extracted features.
    sampling_rate (`int`): The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
    padding_value (`float`): The value that is used to fill the padding values / vectors.

- pad
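For instance, padding a batch of raw waveforms of different lengths might look like this (the checkpoint is an illustrative choice; any audio feature extractor behaves similarly):

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

# Two raw waveforms of different lengths, sampled at 16 kHz.
audio = [
    np.random.randn(16000).astype(np.float32),
    np.random.randn(8000).astype(np.float32),
]

# padding=True fills the shorter sequence with padding_value and returns a BatchFeature
# containing `input_values` (and, for some checkpoints, an `attention_mask`).
inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="pt")
print(inputs["input_values"].shape)  # torch.Size([2, 16000])
```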
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/#sequencefeatureextractor
#sequencefeatureextractor
.md
450_3
Holds the output of the [`~SequenceFeatureExtractor.pad`] and feature extractor specific `__call__` methods. This class is derived from a python dictionary and can be used as a dictionary. Args: data (`dict`, *optional*): Dictionary of lists/arrays/tensors returned by the __call__/pad methods ('input_values', 'attention_mask', etc.). tensor_type (`Union[None, str, TensorType]`, *optional*): You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at initialization.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/#batchfeature
#batchfeature
.md
450_4
image_utils.ImageFeatureExtractionMixin

Mixin that contains utilities for preparing image features.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/feature_extractor.md
https://huggingface.co/docs/transformers/en/main_classes/feature_extractor/#imagefeatureextractionmixin
#imagefeatureextractionmixin
.md
450_5
<!--Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/executorch.md
https://huggingface.co/docs/transformers/en/main_classes/executorch/
.md
451_0
[`ExecuTorch`](https://github.com/pytorch/executorch) is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch ecosystem and supports the deployment of PyTorch models with a focus on portability, productivity, and performance. ExecuTorch introduces well defined entry points to perform model, device, and/or use-case specific optimizations such as backend delegation, user-defined compiler transformations, memory planning, and more. The first step in preparing a PyTorch model for execution on an edge device using ExecuTorch is to export the model. This is achieved through the use of a PyTorch API called [`torch.export`](https://pytorch.org/docs/stable/export.html).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/executorch.md
https://huggingface.co/docs/transformers/en/main_classes/executorch/#executorch
#executorch
.md
451_1
An integration point is being developed to ensure that 🤗 Transformers can be exported using `torch.export`. The goal of this integration is not only to enable export but also to ensure that the exported artifact can be further lowered and optimized to run efficiently in `ExecuTorch`, particularly for mobile and edge use cases.

A wrapper module designed to make a `PreTrainedModel` exportable with `torch.export`, specifically for use with static caching. This module ensures that the exported model is compatible with further lowering and execution in `ExecuTorch`.

Note: This class is specifically designed to support the export process using `torch.export` in a way that ensures the model can be further lowered and run efficiently in `ExecuTorch`.

- forward

Convert a `PreTrainedModel` into an exportable module and export it using `torch.export`, ensuring the exported model is compatible with `ExecuTorch`.

Args:
    model (`PreTrainedModel`): The pretrained model to be exported.
    example_input_ids (`torch.Tensor`): Example input token ids used by `torch.export`.
    example_cache_position (`torch.Tensor`): Example current cache position used by `torch.export`.

Returns:
    Exported program (`torch.export.ExportedProgram`): The exported program generated via `torch.export`.
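A rough sketch of what calling this export helper could look like, assuming it is exposed as `convert_and_export_with_cache` under `transformers.integrations.executorch` (the import path, checkpoint, and cache settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Import path is an assumption for illustration.
from transformers.integrations.executorch import convert_and_export_with_cache

model_id = "HuggingFaceTB/SmolLM-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The model is loaded with a static cache so the exported graph has fixed shapes.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    generation_config=GenerationConfig(
        use_cache=True,
        cache_implementation="static",
        cache_config={"batch_size": 1, "max_cache_len": 32},
    ),
)

input_ids = tokenizer("Hello", return_tensors="pt").input_ids
cache_position = torch.arange(input_ids.shape[1])

exported_program = convert_and_export_with_cache(
    model,
    example_input_ids=input_ids,
    example_cache_position=cache_position,
)
```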
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/executorch.md
https://huggingface.co/docs/transformers/en/main_classes/executorch/#executorch-integration
#executorch-integration
.md
451_2
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/
.md
452_0
Each framework has a generate method for text generation implemented in its respective `GenerationMixin` class:

- PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`].
- TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`].
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`].

Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance, as in the short example below. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.

To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the [text generation strategies guide](../generation_strategies). The guide also explains how to use related features, like token streaming.
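A minimal PyTorch sketch of parameterizing `generate` with a `GenerationConfig` (the checkpoint is just an illustrative choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "HuggingFaceTB/SmolLM-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Multinomial sampling: num_beams=1 (default) and do_sample=True.
generation_config = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    temperature=0.7,
)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```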
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generation
#generation
.md
452_1
generation.GenerationConfig Class that holds a configuration for a generation task. A `generate` call supports the following generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models: - *greedy decoding* if `num_beams=1` and `do_sample=False` - *contrastive search* if `penalty_alpha>0.` and `top_k>1` - *multinomial sampling* if `num_beams=1` and `do_sample=True` - *beam-search decoding* if `num_beams>1` and `do_sample=False` - *beam-search multinomial sampling* if `num_beams>1` and `do_sample=True` - *diverse beam-search decoding* if `num_beams>1` and `num_beam_groups>1` - *constrained beam-search decoding* if `constraints!=None` or `force_words_ids!=None` - *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()` - *dola decoding* if `dola_layers` is passed to `.generate()` To learn more about decoding strategies refer to the [text generation strategies guide](../generation_strategies). <Tip> A large number of these flags control the logits or the stopping criteria of the generation. Make sure you check the [generate-related classes](https://huggingface.co/docs/transformers/internal/generation_utils) for a full description of the possible manipulations, as well as examples of their usage. </Tip> Arg: > Parameters that control the length of the output max_length (`int`, *optional*, defaults to 20): The maximum length the generated tokens can have. Corresponds to the length of the input prompt + `max_new_tokens`. Its effect is overridden by `max_new_tokens`, if also set. max_new_tokens (`int`, *optional*): The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. min_length (`int`, *optional*, defaults to 0): The minimum length of the sequence to be generated. Corresponds to the length of the input prompt + `min_new_tokens`. Its effect is overridden by `min_new_tokens`, if also set. min_new_tokens (`int`, *optional*): The minimum numbers of tokens to generate, ignoring the number of tokens in the prompt. early_stopping (`bool` or `str`, *optional*, defaults to `False`): Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where an heuristic is applied and the generation stops when is it very unlikely to find better candidates; `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). max_time (`float`, *optional*): The maximum amount of time you allow the computation to run for in seconds. generation will still finish the current pass after allocated time has been passed. stop_strings (`str or List[str]`, *optional*): A string or a list of strings that should terminate generation if the model outputs them. > Parameters that control the generation strategy used do_sample (`bool`, *optional*, defaults to `False`): Whether or not to use sampling ; use greedy decoding otherwise. num_beams (`int`, *optional*, defaults to 1): Number of beams for beam search. 1 means no beam search. num_beam_groups (`int`, *optional*, defaults to 1): Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. penalty_alpha (`float`, *optional*): The values balance the model confidence and the degeneration penalty in contrastive search decoding. 
dola_layers (`str` or `List[int]`, *optional*): The layers to use for DoLa decoding. If `None`, DoLa decoding is not used. If a string, it must be one of "low" or "high", which means using the lower part or higher part of the model layers, respectively. "low" means the first half of the layers up to the first 20 layers, and "high" means the last half of the layers up to the last 20 layers. If a list of integers, it must contain the indices of the layers to use for candidate premature layers in DoLa. The 0-th layer is the word embedding layer of the model. Set to `'low'` to improve long-answer reasoning tasks, `'high'` to improve short-answer tasks. Check the [documentation](https://github.com/huggingface/transformers/blob/main/docs/source/en/generation_strategies.md) or [the paper](https://arxiv.org/abs/2309.03883) for more details. > Parameters that control the cache use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding. cache_implementation (`str`, *optional*, default to `None`): Name of the cache class that will be instantiated in `generate`, for faster decoding. Possible values are: - `"static"`: [`StaticCache`] - `"offloaded_static"`: [`OffloadedStaticCache`] - `"sliding_window"`: [`SlidingWindowCache`] - `"hybrid"`: [`HybridCache`] - `"mamba"`: [`MambaCache`] - `"quantized"`: [`QuantizedCache`] We support other cache types, but they must be manually instantiated and passed to `generate` through the `past_key_values` argument. See our [cache documentation](https://huggingface.co/docs/transformers/en/kv_cache) for further information. cache_config (`CacheConfig` or `dict`, *optional*, default to `None`): Arguments used in the key-value cache class can be passed in `cache_config`. Can be passed as a `Dict` and it will be converted to its repsective `CacheConfig` internally. Otherwise can be passed as a `CacheConfig` class matching the indicated `cache_implementation`. return_legacy_cache (`bool`, *optional*, default to `True`): Whether to return the legacy or new format of the cache when `DynamicCache` is used by default. > Parameters for manipulation of the model output logits temperature (`float`, *optional*, defaults to 1.0): The value used to module the next token probabilities. This value is set in a model's `generation_config.json` file. If it isn't set, the default value is 1.0 top_k (`int`, *optional*, defaults to 50): The number of highest probability vocabulary tokens to keep for top-k-filtering. This value is set in a model's `generation_config.json` file. If it isn't set, the default value is 50. top_p (`float`, *optional*, defaults to 1.0): If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. This value is set in a model's `generation_config.json` file. If it isn't set, the default value is 1.0 min_p (`float`, *optional*): Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0 and 1. Typical values are in the 0.01-0.2 range, comparably selective as setting `top_p` in the 0.99-0.8 range (use the opposite of normal `top_p` values). typical_p (`float`, *optional*, defaults to 1.0): Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. 
If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to `typical_p` or higher are kept for generation. See [this paper](https://arxiv.org/pdf/2202.00666.pdf) for more details. epsilon_cutoff (`float`, *optional*, defaults to 0.0): If set to float strictly between 0 and 1, only tokens with a conditional probability greater than `epsilon_cutoff` will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191) for more details. eta_cutoff (`float`, *optional*, defaults to 0.0): Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly between 0 and 1, a token is only considered if it is greater than either `eta_cutoff` or `sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits)))`. The latter term is intuitively the expected next token probability, scaled by `sqrt(eta_cutoff)`. In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191) for more details. diversity_penalty (`float`, *optional*, defaults to 0.0): This value is subtracted from a beam's score if it generates a token same as any beam from other group at a particular time. Note that `diversity_penalty` is only effective if `group beam search` is enabled. repetition_penalty (`float`, *optional*, defaults to 1.0): The parameter for repetition penalty. 1.0 means no penalty. See [this paper](https://arxiv.org/pdf/1909.05858.pdf) for more details. encoder_repetition_penalty (`float`, *optional*, defaults to 1.0): The paramater for encoder_repetition_penalty. An exponential penalty on sequences that are not in the original input. 1.0 means no penalty. length_penalty (`float`, *optional*, defaults to 1.0): Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while `length_penalty` < 0.0 encourages shorter sequences. no_repeat_ngram_size (`int`, *optional*, defaults to 0): If set to int > 0, all ngrams of that size can only occur once. bad_words_ids (`List[List[int]]`, *optional*): List of list of token ids that are not allowed to be generated. Check [`~generation.NoBadWordsLogitsProcessor`] for further documentation and examples. force_words_ids (`List[List[int]]` or `List[List[List[int]]]`, *optional*): List of token ids that must be generated. If given a `List[List[int]]`, this is treated as a simple list of words that must be included, the opposite to `bad_words_ids`. If given `List[List[List[int]]]`, this triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081), where one can allow different forms of each word. renormalize_logits (`bool`, *optional*, defaults to `False`): Whether to renormalize the logits after applying all the logits processors (including the custom ones). It's highly recommended to set this flag to `True` as the search algorithms suppose the score logits are normalized but some logit processors break the normalization. 
constraints (`List[Constraint]`, *optional*): Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by `Constraint` objects, in the most sensible way possible. forced_bos_token_id (`int`, *optional*, defaults to `model.config.forced_bos_token_id`): The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target language token. forced_eos_token_id (`int` or List[int]`, *optional*, defaults to `model.config.forced_eos_token_id`): The id of the token to force as the last generated token when `max_length` is reached. Optionally, use a list to set multiple *end-of-sequence* tokens. remove_invalid_values (`bool`, *optional*, defaults to `model.config.remove_invalid_values`): Whether to remove possible *nan* and *inf* outputs of the model to prevent the generation method to crash. Note that using `remove_invalid_values` can slow down generation. exponential_decay_length_penalty (`tuple(int, float)`, *optional*): This Tuple adds an exponentially increasing length penalty, after a certain amount of tokens have been generated. The tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates where penalty starts and `decay_factor` represents the factor of exponential decay suppress_tokens (`List[int]`, *optional*): A list of tokens that will be suppressed at generation. The `SupressTokens` logit processor will set their log probs to `-inf` so that they are not sampled. begin_suppress_tokens (`List[int]`, *optional*): A list of tokens that will be suppressed at the beginning of the generation. The `SupressBeginTokens` logit processor will set their log probs to `-inf` so that they are not sampled. forced_decoder_ids (`List[List[int]]`, *optional*): A list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. For example, `[[1, 123]]` means the second generated token will always be a token of index 123. sequence_bias (`Dict[Tuple[int], float]`, *optional*)): Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. Check [`~generation.SequenceBiasLogitsProcessor`] for further documentation and examples. token_healing (`bool`, *optional*, defaults to `False`): Heal tail tokens of prompts by replacing them with their appropriate extensions. This enhances the quality of completions for prompts affected by greedy tokenization bias. guidance_scale (`float`, *optional*): The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. low_memory (`bool`, *optional*): Switch to sequential beam search and sequential topk for contrastive search to reduce peak memory. Used with beam search and contrastive search. watermarking_config (`BaseWatermarkingConfig` or `dict`, *optional*): Arguments used to watermark the model outputs by adding a small bias to randomly selected set of "green" tokens. See the docs of [`SynthIDTextWatermarkingConfig`] and [`WatermarkingConfig`] for more details. If passed as `Dict`, it will be converted to a `WatermarkingConfig` internally. 
> Parameters that define the output variables of generate num_return_sequences (`int`, *optional*, defaults to 1): The number of independently computed returned sequences for each element in the batch. output_attentions (`bool`, *optional*, defaults to `False`): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more details. output_hidden_states (`bool`, *optional*, defaults to `False`): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more details. output_scores (`bool`, *optional*, defaults to `False`): Whether or not to return the prediction scores. See `scores` under returned tensors for more details. output_logits (`bool`, *optional*): Whether or not to return the unprocessed prediction logit scores. See `logits` under returned tensors for more details. return_dict_in_generate (`bool`, *optional*, defaults to `False`): Whether or not to return a [`~utils.ModelOutput`], as opposed to returning exclusively the generated sequence. This flag must be set to `True` to return the generation cache (when `use_cache` is `True`) or optional outputs (see flags starting with `output_`) > Special tokens that can be used at generation time pad_token_id (`int`, *optional*): The id of the *padding* token. bos_token_id (`int`, *optional*): The id of the *beginning-of-sequence* token. eos_token_id (`Union[int, List[int]]`, *optional*): The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens. > Generation parameters exclusive to encoder-decoder models encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0): If set to int > 0, all ngrams of that size that occur in the `encoder_input_ids` cannot occur in the `decoder_input_ids`. decoder_start_token_id (`int` or `List[int]`, *optional*): If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token or a list of length `batch_size`. Indicating a list enables different start ids for each element in the batch (e.g. multilingual models with different target languages in one batch) > Generation parameters exclusive to assistant generation is_assistant (`bool`, *optional*, defaults to `False`): Whether the model is an assistant (draft) model. num_assistant_tokens (`int`, *optional*, defaults to 20): Defines the number of _speculative tokens_ that shall be generated by the assistant model before being checked by the target model at each iteration. Higher values for `num_assistant_tokens` make the generation more _speculative_ : If the assistant model is performant larger speed-ups can be reached, if the assistant model requires lots of corrections, lower speed-ups are reached. num_assistant_tokens_schedule (`str`, *optional*, defaults to `"constant"`): Defines the schedule at which max assistant tokens shall be changed during inference. - `"heuristic"`: When all speculative tokens are correct, increase `num_assistant_tokens` by 2 else reduce by 1. `num_assistant_tokens` value is persistent over multiple generation calls with the same assistant model. - `"heuristic_transient"`: Same as `"heuristic"` but `num_assistant_tokens` is reset to its initial value after each generation call. - `"constant"`: `num_assistant_tokens` stays unchanged during generation assistant_confidence_threshold (`float`, *optional*, defaults to 0.4): The confidence threshold for the assistant model. 
If the assistant model's confidence in its prediction for the current token is lower than this threshold, the assistant model stops the current token generation iteration, even if the number of _speculative tokens_ (defined by `num_assistant_tokens`) is not yet reached. The assistant's confidence threshold is adjusted throughout the speculative iterations to reduce the number of unnecessary draft and target forward passes, biased towards avoiding false negatives. `assistant_confidence_threshold` value is persistent over multiple generation calls with the same assistant model. It is an unsupervised version of the dynamic speculation lookahead from Dynamic Speculation Lookahead Accelerates Speculative Decoding of Large Language Models <https://arxiv.org/abs/2405.04304>. prompt_lookup_num_tokens (`int`, *optional*): The number of tokens to be output as candidate tokens. max_matching_ngram_size (`int`, *optional*): The maximum ngram size to be considered for matching in the prompt. Default to 2 if not provided. assistant_early_exit(`int`, *optional*): If set to a positive integer, early exit of the model will be used as an assistant. Can only be used with models that support early exit (i.e. models where logits from intermediate layers can be interpreted by the LM head). assistant_lookbehind(`int`, *optional*, defaults to 10): If set to a positive integer, the re-encodeing process will additionally consider the last `assistant_lookbehind` assistant tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this [blog](https://huggingface.co/blog/universal_assisted_generation) for more details. target_lookbehind(`int`, *optional*, defaults to 10): If set to a positive integer, the re-encodeing process will additionally consider the last `target_lookbehind` target tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this [blog](https://huggingface.co/blog/universal_assisted_generation) for more details. > Parameters related to performances and compilation compile_config (CompileConfig, *optional*): If using a static cache, this controls how `generate` will `compile` the forward pass for performance gains. > Wild card generation_kwargs: Additional generation kwargs will be forwarded to the `generate` function of the model. Kwargs that are not present in `generate`'s signature will be used in the model forward pass. - from_pretrained - from_model_config - save_pretrained - update - validate - get_generation_mode
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationconfig
#generationconfig
.md
452_2
A class containing all functions for auto-regressive text generation, to be used as a mixin in [`PreTrainedModel`]. The class exposes [`~generation.GenerationMixin.generate`], which can be used for: - *greedy decoding* if `num_beams=1` and `do_sample=False` - *contrastive search* if `penalty_alpha>0` and `top_k>1` - *multinomial sampling* if `num_beams=1` and `do_sample=True` - *beam-search decoding* if `num_beams>1` and `do_sample=False` - *beam-search multinomial sampling* if `num_beams>1` and `do_sample=True` - *diverse beam-search decoding* if `num_beams>1` and `num_beam_groups>1` - *constrained beam-search decoding* if `constraints!=None` or `force_words_ids!=None` - *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()` To learn more about decoding strategies refer to the [text generation strategies guide](../generation_strategies). - generate - compute_transition_scores
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/#generationmixin
#generationmixin
.md
452_3
TFGenerationMixin - generate - compute_transition_scores
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/#tfgenerationmixin
#tfgenerationmixin
.md
452_4
FlaxGenerationMixin - generate
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/text_generation.md
https://huggingface.co/docs/transformers/en/main_classes/text_generation/#flaxgenerationmixin
#flaxgenerationmixin
.md
452_5
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/
.md
453_0
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a "Fast" implementation based on the Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:

1. a significant speed-up, in particular when doing batched tokenization, and
2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token).

The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] implement the common methods for encoding string inputs into model inputs (see below) and instantiating/saving Python and "Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3 repository). They both rely on [`~tokenization_utils_base.PreTrainedTokenizerBase`], which contains the common methods, and [`~tokenization_utils_base.SpecialTokensMixin`].

[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main methods for using all the tokenizers:

- Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization.

[`BatchEncoding`] holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by the HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token), as shown in the example below.
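For example, the alignment methods exposed by a "Fast" tokenizer can be used like this (the checkpoint is illustrative):

```python
from transformers import AutoTokenizer

# AutoTokenizer returns the "Fast" (Rust-backed) tokenizer when one is available.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

encoding = tokenizer("Hello world!", return_offsets_mapping=True)
print(encoding.tokens())            # sub-word token strings
print(encoding.word_ids())          # word index for each token
print(encoding["offset_mapping"])   # character span for each token (fast tokenizers only)
```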
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#tokenizer
#tokenizer
.md
453_1
Apart from that, each tokenizer can be a "multimodal" tokenizer, which means that the tokenizer will hold all relevant special tokens as part of its attributes for easier access. For example, if the tokenizer is loaded from a vision-language model like LLaVA, you will be able to access `tokenizer.image_token_id` to obtain the special image token used as a placeholder.

To enable extra special tokens for any type of tokenizer, you have to add the following lines and save the tokenizer. Extra special tokens do not have to be modality related and can be anything that the model often needs access to. In the code below, the tokenizer saved at `output_dir` will have direct access to three more special tokens.

```python
from transformers import AutoTokenizer

vision_tokenizer = AutoTokenizer.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    extra_special_tokens={"image_token": "<image>", "boi_token": "<image_start>", "eoi_token": "<image_end>"},
)
print(vision_tokenizer.image_token, vision_tokenizer.image_token_id)
("<image>", 32000)

# Save the tokenizer so that the extra special tokens persist and can be reloaded from `output_dir`.
vision_tokenizer.save_pretrained("output_dir")
```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#multimodal-tokenizer
#multimodal-tokenizer
.md
453_2
Base class for all slow tokenizers. Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`]. Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading pretrained tokenizers as well as adding tokens to the vocabulary. This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...). Class attributes (overridden by derived classes) - **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). - **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file. - **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model. - **padding_side** (`str`) -- The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`. - **truncation_side** (`str`) -- The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*): The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*): The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*): A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description. model_input_names (`List[string]`, *optional*): The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name. bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`. unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`. 
pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`.
cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`.
mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`.
additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with `skip_special_tokens` set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process.
split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. Passing this argument will affect the internal state of the tokenizer. The default behavior is to not split special tokens. This means that if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`.

- __call__
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
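For instance, `apply_chat_template` (listed above) formats a list of chat messages with the tokenizer's Jinja `chat_template` (the checkpoint is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")

messages = [
    {"role": "user", "content": "Hello, how are you?"},
]

# Render the chat as a prompt string; set tokenize=True to get input ids directly.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```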
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizer
#pretrainedtokenizer
.md
453_3
The [`PreTrainedTokenizerFast`] depend on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) page to understand how this is done. Base class for all slow tokenizers. Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`]. Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading pretrained tokenizers as well as adding tokens to the vocabulary. This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...). Class attributes (overridden by derived classes) - **vocab_files_names** (`Dict[str, str]`) -- A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). - **pretrained_vocab_files_map** (`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file. - **model_input_names** (`List[str]`) -- A list of inputs expected in the forward pass of the model. - **padding_side** (`str`) -- The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`. - **truncation_side** (`str`) -- The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`. Args: model_max_length (`int`, *optional*): The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with [`~tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`], this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. truncation_side (`str`, *optional*): The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. chat_template (`str`, *optional*): A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description. model_input_names (`List[string]`, *optional*): The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name. bos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`. eos_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`. 
unk_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`. sep_token (`str` or `tokenizers.AddedToken`, *optional*): A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`. cls_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`. mask_token (`str` or `tokenizers.AddedToken`, *optional*): A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`. additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, *optional*): A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with `skip_special_tokens` set to `True`. If they are not part of the vocabulary, they will be added at the end of the vocabulary. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`): Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process. split_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the special tokens should be split during the tokenization process. Passing this argument will affect the internal state of the tokenizer. The default behavior is to not split special tokens. This means that if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`. PreTrainedTokenizerFast - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all
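As a quick illustration (the checkpoint name is only an example), the sketch below loads a fast tokenizer, checks that it is Rust-backed, and uses the offset mapping that only fast tokenizers provide. The commented-out `my_tokenizers_tokenizer` object is a hypothetical `tokenizers.Tokenizer` you might have trained yourself.

```python
from transformers import AutoTokenizer, PreTrainedTokenizerFast

# AutoTokenizer returns the fast (Rust-backed) tokenizer by default when one exists
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer.is_fast)  # True

# Offset mappings (character spans of each token) are only available with fast tokenizers
encoding = tokenizer("Hello world!", return_offsets_mapping=True)
print(encoding["offset_mapping"])

# A tokenizer built or trained with the 🤗 tokenizers library can be wrapped directly:
# fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=my_tokenizers_tokenizer)
```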
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#pretrainedtokenizerfast
#pretrainedtokenizerfast
.md
453_4
Holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase.__call__`], [`~tokenization_utils_base.PreTrainedTokenizerBase.encode_plus`] and [`~tokenization_utils_base.PreTrainedTokenizerBase.batch_encode_plus`] methods (tokens, attention_masks, etc). This class is derived from a Python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space. Args: data (`dict`, *optional*): Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods ('input_ids', 'attention_mask', etc.). encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, *optional*): If the tokenizer is a fast tokenizer which outputs additional information like the mapping from word/character space to token space, the `tokenizers.Encoding` instance or list of instances (for batches) holds this information. tensor_type (`Union[None, str, TensorType]`, *optional*): You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/Numpy tensors at initialization. prepend_batch_axis (`bool`, *optional*, defaults to `False`): Whether or not to add a batch axis when converting to tensors (see `tensor_type` above). Note that this parameter only has an effect if `tensor_type` is set; otherwise it has no effect. n_sequences (`Optional[int]`, *optional*): The number of sequences used to generate each sample from the batch encoded in this [`BatchEncoding`] (typically `1` for a single sequence or `2` for a pair of sequences).
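A minimal sketch of how a `BatchEncoding` is typically used, assuming a fast tokenizer so that the token/word/character mapping helpers are available (the checkpoint name is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
batch = tokenizer(
    ["A short sentence.", "Another, slightly longer, sentence."],
    padding=True,
    return_tensors="pt",
)

# BatchEncoding behaves like a dict of model inputs
print(batch.keys())
print(batch["input_ids"].shape)

# With a fast tokenizer, helpers map between token, word and character space
print(batch.tokens(0))            # tokens of the first sequence
print(batch.word_ids(0))          # word index for every token (None for special tokens)
print(batch.char_to_token(0, 2))  # index of the token covering character 2 of the first text
```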
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/tokenizer.md
https://huggingface.co/docs/transformers/en/main_classes/tokenizer/#batchencoding
#batchencoding
.md
453_5
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/
.md
454_0
The `.optimization` module provides: - an optimizer with weight decay fixed that can be used to fine-tune models, - several schedules in the form of schedule objects that inherit from `_LRSchedule`, and - a gradient accumulation class to accumulate the gradients of multiple batches
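A minimal sketch of how these pieces fit together in a plain PyTorch training loop, assuming a transformers version that still exports the `AdamW` class documented in this section (it is deprecated in favor of `torch.optim.AdamW`); the tiny `nn.Linear` model and dummy loss are placeholders for a real model and objective.

```python
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # placeholder for any PreTrainedModel / nn.Module

optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

for step in range(1000):
    loss = model(torch.randn(8, 10)).sum()  # dummy forward pass and loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # the schedule is stepped once per optimizer step
    optimizer.zero_grad()
```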
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#optimization
#optimization
.md
454_1
Implements Adam algorithm with weight decay fix as introduced in [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101). Parameters: params (`Iterable[nn.parameter.Parameter]`): Iterable of parameters to optimize or dictionaries defining parameter groups. lr (`float`, *optional*, defaults to 0.001): The learning rate to use. betas (`Tuple[float,float]`, *optional*, defaults to `(0.9, 0.999)`): Adam's betas parameters (b1, b2). eps (`float`, *optional*, defaults to 1e-06): Adam's epsilon for numerical stability. weight_decay (`float`, *optional*, defaults to 0.0): Decoupled weight decay to apply. correct_bias (`bool`, *optional*, defaults to `True`): Whether or not to correct bias in Adam (for instance, in Bert TF repository they use `False`). no_deprecation_warning (`bool`, *optional*, defaults to `False`): A flag used to disable the deprecation warning (set to `True` to disable the warning).
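As an illustration of the `params` argument accepting parameter groups, a common pattern is to disable weight decay for biases and LayerNorm weights. This is a sketch rather than the only valid grouping, and the checkpoint name is an example.

```python
from transformers import AdamW, AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-uncased")

# Exclude biases and LayerNorm weights from weight decay
no_decay = ("bias", "LayerNorm.weight")
grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(grouped_parameters, lr=2e-5, eps=1e-8, correct_bias=True)
```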
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamw-pytorch
#adamw-pytorch
.md
454_2
AdaFactor pytorch implementation can be used as a drop in replacement for Adam original fairseq code: https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://arxiv.org/abs/1804.04235 Note that this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and `warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and `relative_step=False`. Arguments: params (`Iterable[nn.parameter.Parameter]`): Iterable of parameters to optimize or dictionaries defining parameter groups. lr (`float`, *optional*): The external learning rate. eps (`Tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`): Regularization constants for square gradient and parameter scale respectively clip_threshold (`float`, *optional*, defaults to 1.0): Threshold of root mean square of final gradient update decay_rate (`float`, *optional*, defaults to -0.8): Coefficient used to compute running averages of square beta1 (`float`, *optional*): Coefficient used for computing running averages of gradient weight_decay (`float`, *optional*, defaults to 0.0): Weight decay (L2 penalty) scale_parameter (`bool`, *optional*, defaults to `True`): If True, learning rate is scaled by root mean square relative_step (`bool`, *optional*, defaults to `True`): If True, time-dependent learning rate is computed instead of external learning rate warmup_init (`bool`, *optional*, defaults to `False`): Time-dependent learning rate computation depends on whether warm-up initialization is being used This implementation handles low-precision (FP16, bfloat) values, but we have not thoroughly tested. Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3): - Training without LR warmup or clip_threshold is not recommended. - use scheduled LR warm-up to fixed LR - use clip_threshold=1.0 (https://arxiv.org/abs/1804.04235) - Disable relative updates - Use scale_parameter=False - Additional optimizer operations like gradient clipping should not be used alongside Adafactor Example: ```python Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) ``` Others reported the following combination to work well: ```python Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) ``` When using `lr=None` with [`Trainer`] you will most likely need to use [`~optimization.AdafactorSchedule`] scheduler as following: ```python from transformers.optimization import Adafactor, AdafactorSchedule optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) lr_scheduler = AdafactorSchedule(optimizer) trainer = Trainer(..., optimizers=(optimizer, lr_scheduler)) ``` Usage: ```python # replace AdamW with Adafactor optimizer = Adafactor( model.parameters(), lr=1e-3, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=False, scale_parameter=False, warmup_init=False, ) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adafactor-pytorch
#adafactor-pytorch
.md
454_3
Implements Adam algorithm with weight decay fix as introduced in [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101). Parameters: params (`Iterable[nn.parameter.Parameter]`): Iterable of parameters to optimize or dictionaries defining parameter groups. lr (`float`, *optional*, defaults to 0.001): The learning rate to use. betas (`Tuple[float,float]`, *optional*, defaults to `(0.9, 0.999)`): Adam's betas parameters (b1, b2). eps (`float`, *optional*, defaults to 1e-06): Adam's epsilon for numerical stability. weight_decay (`float`, *optional*, defaults to 0.0): Decoupled weight decay to apply. correct_bias (`bool`, *optional*, defaults to `True`): Whether or not to correct bias in Adam (for instance, in Bert TF repository they use `False`). no_deprecation_warning (`bool`, *optional*, defaults to `False`): A flag used to disable the deprecation warning (set to `True` to disable the warning). AdamWeightDecay create_optimizer
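On the TensorFlow side, `create_optimizer` is the usual entry point: it builds an `AdamWeightDecay` instance together with a matching warmup-plus-decay learning-rate schedule. The sketch below assumes TensorFlow is installed; the checkpoint name and step counts are illustrative.

```python
from transformers import TFAutoModelForSequenceClassification, create_optimizer

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=2
)

# Returns (AdamWeightDecay optimizer, learning-rate schedule)
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1000,
    num_warmup_steps=100,
    weight_decay_rate=0.01,
)
model.compile(optimizer=optimizer)
```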
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#adamweightdecay-tensorflow
#adamweightdecay-tensorflow
.md
454_4
Scheduler names for the parameter `lr_scheduler_type` in [`TrainingArguments`]. By default, it uses "linear". Internally, this retrieves `get_linear_schedule_with_warmup` scheduler from [`Trainer`]. Scheduler types: - "linear" = get_linear_schedule_with_warmup - "cosine" = get_cosine_schedule_with_warmup - "cosine_with_restarts" = get_cosine_with_hard_restarts_schedule_with_warmup - "polynomial" = get_polynomial_decay_schedule_with_warmup - "constant" = get_constant_schedule - "constant_with_warmup" = get_constant_schedule_with_warmup - "inverse_sqrt" = get_inverse_sqrt_schedule - "reduce_lr_on_plateau" = get_reduce_on_plateau_schedule - "cosine_with_min_lr" = get_cosine_with_min_lr_schedule_with_warmup - "warmup_stable_decay" = get_wsd_schedule Unified API to get any scheduler from its name. Args: name (`str` or `SchedulerType`): The name of the scheduler to use. optimizer (`torch.optim.Optimizer`): The optimizer that will be used during training. num_warmup_steps (`int`, *optional*): The number of warmup steps to do. This is not required by all schedulers (hence the argument being optional), the function will raise an error if it's unset and the scheduler type requires it. num_training_steps (`int``, *optional*): The number of training steps to do. This is not required by all schedulers (hence the argument being optional), the function will raise an error if it's unset and the scheduler type requires it. scheduler_specific_kwargs (`dict`, *optional*): Extra parameters for schedulers such as cosine with restarts. Mismatched scheduler types and scheduler parameters will cause the scheduler function to raise a TypeError. Create a schedule with a constant learning rate, using the learning rate set in optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. Create a schedule with a constant learning rate, using the learning rate set in optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. _with_warmup <img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/> Create a schedule with a learning rate that decreases following the values of the cosine function between the initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the initial lr set in the optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. num_training_steps (`int`): The total number of training steps. num_cycles (`float`, *optional*, defaults to 0.5): The number of waves in the cosine schedule (the defaults is to just decrease from the max value to 0 following a half-cosine). last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. 
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/> Create a schedule with a learning rate that decreases following the values of the cosine function between the initial lr set in the optimizer to 0, with several hard restarts, after a warmup period during which it increases linearly between 0 and the initial lr set in the optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. num_training_steps (`int`): The total number of training steps. num_cycles (`int`, *optional*, defaults to 1): The number of hard restarts to use. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. <img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/> Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0, after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. num_training_steps (`int`): The total number of training steps. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. <img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/> Create a schedule with a learning rate that decreases as a polynomial decay from the initial lr set in the optimizer to end lr defined by *lr_end*, after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. num_training_steps (`int`): The total number of training steps. lr_end (`float`, *optional*, defaults to 1e-7): The end LR. power (`float`, *optional*, defaults to 1.0): Power factor. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Note: *power* defaults to 1.0 as in the fairseq implementation, which in turn is based on the original BERT implementation at https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37 Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. Create a schedule with an inverse square-root learning rate, from the initial lr set in the optimizer, after a warmup period which increases lr linearly from 0 to the initial lr set in the optimizer. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. timescale (`int`, *optional*, defaults to `num_warmup_steps`): Time scale. last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. Create a schedule with a learning rate that has three stages: 1. linear increase from 0 to initial lr. 2. 
constant lr (equal to initial lr). 3. decrease following the values of the cosine function from the initial lr set in the optimizer to a fraction of the initial lr. Args: optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate. num_warmup_steps (`int`): The number of steps for the warmup phase. num_stable_steps (`int`): The number of steps for the stable phase. num_decay_steps (`int`): The number of steps for the cosine annealing phase. min_lr_ratio (`float`, *optional*, defaults to 0): The minimum learning rate as a ratio of the initial learning rate. num_cycles (`float`, *optional*, defaults to 0.5): The number of waves in the cosine schedule (the default is to just decrease from the max value to 0 following a half-cosine). last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training. Return: `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
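The schedules above can be fetched either by name through `get_scheduler` (the same names accepted by `lr_scheduler_type` in [`TrainingArguments`]) or by calling the specific factory function. A brief sketch, with a toy model and arbitrary step counts:

```python
import torch
from transformers import get_scheduler, get_cosine_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# By name, through the unified API
scheduler = get_scheduler(
    "cosine", optimizer=optimizer, num_warmup_steps=100, num_training_steps=1000
)

# Or equivalently, by calling the factory function directly
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)
```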
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#learning-rate-schedules-pytorch
#learning-rate-schedules-pytorch
.md
454_5
WarmUp
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#warmup-tensorflow
#warmup-tensorflow
.md
454_6
GradientAccumulator
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/optimizer_schedules.md
https://huggingface.co/docs/transformers/en/main_classes/optimizer_schedules/#gradientaccumulator-tensorflow
#gradientaccumulator-tensorflow
.md
454_7
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/
.md
455_0
The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and [`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository). [`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which are common among all the models to: - resize the input token embeddings when new tokens are added to the vocabulary - prune the attention heads of the model. The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`] (for the PyTorch models) and [`~modeling_tf_utils.TFModuleUtilsMixin`] (for the TensorFlow models) or for text generation, [`~generation.GenerationMixin`] (for the PyTorch models), [`~generation.TFGenerationMixin`] (for the TensorFlow models) and [`~generation.FlaxGenerationMixin`] (for the Flax/JAX models).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#models
#models
.md
455_1
Base class for all models. [`PreTrainedModel`] takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to: - resize the input embeddings, - prune heads in the self-attention heads. Class attributes (overridden by derived classes): - **config_class** ([`PretrainedConfig`]) -- A subclass of [`PretrainedConfig`] to use as configuration class for this model architecture. - **load_tf_weights** (`Callable`) -- A python *method* for loading a TensorFlow checkpoint in a PyTorch model, taking as arguments: - **model** ([`PreTrainedModel`]) -- An instance of the model on which to load the TensorFlow checkpoint. - **config** ([`PreTrainedConfig`]) -- An instance of the configuration associated to the model. - **path** (`str`) -- A path to the TensorFlow checkpoint. - **base_model_prefix** (`str`) -- A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model. - **is_parallelizable** (`bool`) -- A flag indicating whether this model supports model parallelization. - **main_input_name** (`str`) -- The name of the principal input to the model (often `input_ids` for NLP models, `pixel_values` for vision models and `input_values` for speech models). - push_to_hub - all Custom models should also include a `_supports_assign_param_buffer`, which determines if superfast init can apply on the particular model. Signs that your model needs this are if `test_save_and_load_from_pretrained` fails. If so, set this to `False`.
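A short sketch of the common operations mentioned above: loading a pretrained model, resizing its input embeddings after new tokens are added, and saving it locally. The checkpoint name and paths are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

# After extending the tokenizer, resize the input embeddings to the new vocabulary size
tokenizer.add_tokens(["<new_token>"])
model.resize_token_embeddings(len(tokenizer))

# Save locally and reload
model.save_pretrained("./my-gpt2")
reloaded = AutoModelForCausalLM.from_pretrained("./my-gpt2")
```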
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#pretrainedmodel
#pretrainedmodel
.md
455_2
modeling_utils.ModuleUtilsMixin A few utilities for `torch.nn.Modules`, to be used as a mixin.
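For instance, `num_parameters` comes from this mixin; a quick sketch (the checkpoint name is illustrative):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-uncased")

# Utilities provided by ModuleUtilsMixin
print(model.num_parameters())                     # total number of parameters
print(model.num_parameters(only_trainable=True))  # trainable parameters only
```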
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#moduleutilsmixin
#moduleutilsmixin
.md
455_3
TFPreTrainedModel - push_to_hub - all
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#tfpretrainedmodel
#tfpretrainedmodel
.md
455_4
modeling_tf_utils.TFModelUtilsMixin
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#tfmodelutilsmixin
#tfmodelutilsmixin
.md
455_5
FlaxPreTrainedModel - push_to_hub - all
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#flaxpretrainedmodel
#flaxpretrainedmodel
.md
455_6
utils.PushToHubMixin A Mixin containing the functionality to push a model or tokenizer to the hub.
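A minimal sketch of pushing a model and tokenizer to the Hub; the repository id is a placeholder, and the calls assume you are authenticated (for instance via `huggingface-cli login` or by passing `token=...`).

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# "my-username/my-finetuned-model" is a placeholder repository id
model.push_to_hub("my-username/my-finetuned-model")
tokenizer.push_to_hub("my-username/my-finetuned-model")
```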
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#pushing-to-the-hub
#pushing-to-the-hub
.md
455_7
modeling_utils.load_sharded_checkpoint This is the same as [`torch.nn.Module.load_state_dict`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict) but for a sharded checkpoint. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model. Args: model (`torch.nn.Module`): The model in which to load the checkpoint. folder (`str` or `os.PathLike`): A path to a folder containing the sharded checkpoint. strict (`bool`, *optional*, defaults to `True`): Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. prefer_safe (`bool`, *optional*, defaults to `False`): If both safetensors and PyTorch save files are present in the checkpoint and `prefer_safe` is `True`, the safetensors files will be loaded. Otherwise, PyTorch files are always loaded when possible. Returns: `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields - `missing_keys` is a list of str containing the missing keys - `unexpected_keys` is a list of str containing the unexpected keys
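A brief sketch of loading a sharded checkpoint produced by `save_pretrained`; the checkpoint name, folder name and shard size are illustrative.

```python
from transformers import AutoModelForCausalLM
from transformers.modeling_utils import load_sharded_checkpoint

# Create a sharded checkpoint on disk (folder name and shard size are arbitrary)
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
model.save_pretrained("./gpt2-sharded", max_shard_size="200MB")

# Load the shards back into a model instance, one shard at a time
result = load_sharded_checkpoint(model, "./gpt2-sharded", strict=True)
print(result.missing_keys, result.unexpected_keys)
```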
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/model.md
https://huggingface.co/docs/transformers/en/main_classes/model/#sharded-checkpoints
#sharded-checkpoints
.md
455_8
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md
https://huggingface.co/docs/transformers/en/main_classes/pipelines/
.md
456_0
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](../task_summary) for examples of use. There are two categories of pipeline abstractions to be aware of: - The [`pipeline`], which is the most powerful object encapsulating all other pipelines. - Task-specific pipelines are available for [audio](#audio), [computer vision](#computer-vision), [natural language processing](#natural-language-processing), and [multimodal](#multimodal) tasks.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md
https://huggingface.co/docs/transformers/en/main_classes/pipelines/#pipelines
#pipelines
.md
456_1
The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but can provide additional quality of life. Simple call on one item: ```python >>> pipe = pipeline("text-classification") >>> pipe("This restaurant is awesome") [{'label': 'POSITIVE', 'score': 0.9998743534088135}] ``` If you want to use a specific model from the [hub](https://huggingface.co) you can ignore the task if the model on the hub already defines it: ```python >>> pipe = pipeline(model="FacebookAI/roberta-large-mnli") >>> pipe("This restaurant is awesome") [{'label': 'NEUTRAL', 'score': 0.7313136458396912}] ``` To call a pipeline on many items, you can call it with a *list*. ```python >>> pipe = pipeline("text-classification") >>> pipe(["This restaurant is awesome", "This restaurant is awful"]) [{'label': 'POSITIVE', 'score': 0.9998743534088135}, {'label': 'NEGATIVE', 'score': 0.9996669292449951}] ``` To iterate over full datasets it is recommended to use a `dataset` directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn't don't hesitate to create an issue. ```python import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset for out in tqdm(pipe(KeyDataset(dataset, "file"))): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` For ease of use, a generator is also possible: ```python from transformers import pipeline pipe = pipeline("text-classification") def data(): while True: # This could come from a dataset, a database, a queue or HTTP request # in a server # Caveat: because this is iterative, you cannot use `num_workers > 1` variable # to use multiple threads to preprocess data. You can still have 1 thread that # does the preprocessing while the main runs the big inference yield "This is a test" for out in pipe(data()): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` Utility factory method to build a [`Pipeline`]. A pipeline consists of: - One or more components for pre-processing model inputs, such as a [tokenizer](tokenizer), [image_processor](image_processor), [feature_extractor](feature_extractor), or [processor](processors). - A [model](model) that generates predictions from the inputs. - Optional post-processing steps to refine the model's output, which can also be handled by processors. <Tip> While there are such optional arguments as `tokenizer`, `feature_extractor`, `image_processor`, and `processor`, they shouldn't be specified all at once. If these components are not provided, `pipeline` will try to load required ones automatically. In case you want to provide these components explicitly, please refer to a specific pipeline in order to get more details regarding what components are required. </Tip> Args: task (`str`): The task defining which pipeline will be returned. Currently accepted tasks are: - `"audio-classification"`: will return a [`AudioClassificationPipeline`]. 
- `"automatic-speech-recognition"`: will return a [`AutomaticSpeechRecognitionPipeline`]. - `"depth-estimation"`: will return a [`DepthEstimationPipeline`]. - `"document-question-answering"`: will return a [`DocumentQuestionAnsweringPipeline`]. - `"feature-extraction"`: will return a [`FeatureExtractionPipeline`]. - `"fill-mask"`: will return a [`FillMaskPipeline`]:. - `"image-classification"`: will return a [`ImageClassificationPipeline`]. - `"image-feature-extraction"`: will return an [`ImageFeatureExtractionPipeline`]. - `"image-segmentation"`: will return a [`ImageSegmentationPipeline`]. - `"image-text-to-text"`: will return a [`ImageTextToTextPipeline`]. - `"image-to-image"`: will return a [`ImageToImagePipeline`]. - `"image-to-text"`: will return a [`ImageToTextPipeline`]. - `"mask-generation"`: will return a [`MaskGenerationPipeline`]. - `"object-detection"`: will return a [`ObjectDetectionPipeline`]. - `"question-answering"`: will return a [`QuestionAnsweringPipeline`]. - `"summarization"`: will return a [`SummarizationPipeline`]. - `"table-question-answering"`: will return a [`TableQuestionAnsweringPipeline`]. - `"text2text-generation"`: will return a [`Text2TextGenerationPipeline`]. - `"text-classification"` (alias `"sentiment-analysis"` available): will return a [`TextClassificationPipeline`]. - `"text-generation"`: will return a [`TextGenerationPipeline`]:. - `"text-to-audio"` (alias `"text-to-speech"` available): will return a [`TextToAudioPipeline`]:. - `"token-classification"` (alias `"ner"` available): will return a [`TokenClassificationPipeline`]. - `"translation"`: will return a [`TranslationPipeline`]. - `"translation_xx_to_yy"`: will return a [`TranslationPipeline`]. - `"video-classification"`: will return a [`VideoClassificationPipeline`]. - `"visual-question-answering"`: will return a [`VisualQuestionAnsweringPipeline`]. - `"zero-shot-classification"`: will return a [`ZeroShotClassificationPipeline`]. - `"zero-shot-image-classification"`: will return a [`ZeroShotImageClassificationPipeline`]. - `"zero-shot-audio-classification"`: will return a [`ZeroShotAudioClassificationPipeline`]. - `"zero-shot-object-detection"`: will return a [`ZeroShotObjectDetectionPipeline`]. model (`str` or [`PreTrainedModel`] or [`TFPreTrainedModel`], *optional*): The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from [`PreTrainedModel`] (for PyTorch) or [`TFPreTrainedModel`] (for TensorFlow). If not provided, the default for the `task` will be loaded. config (`str` or [`PretrainedConfig`], *optional*): The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from [`PretrainedConfig`]. If not provided, the default configuration file for the requested model will be used. That means that if `model` is given, its default configuration will be used. However, if `model` is not supplied, this `task`'s default model's config is used instead. tokenizer (`str` or [`PreTrainedTokenizer`], *optional*): The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from [`PreTrainedTokenizer`]. If not provided, the default tokenizer for the given `model` will be loaded (if it is a string). If `model` is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string). 
However, if `config` is also not given or not a string, then the default tokenizer for the given `task` will be loaded. feature_extractor (`str` or [`PreTrainedFeatureExtractor`], *optional*): The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor inheriting from [`PreTrainedFeatureExtractor`]. Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal models. Multi-modal models will also require a tokenizer to be passed. If not provided, the default feature extractor for the given `model` will be loaded (if it is a string). If `model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it is a string). However, if `config` is also not given or not a string, then the default feature extractor for the given `task` will be loaded. image_processor (`str` or [`BaseImageProcessor`], *optional*): The image processor that will be used by the pipeline to preprocess images for the model. This can be a model identifier or an actual image processor inheriting from [`BaseImageProcessor`]. Image processors are used for Vision models and multi-modal models that require image inputs. Multi-modal models will also require a tokenizer to be passed. If not provided, the default image processor for the given `model` will be loaded (if it is a string). If `model` is not specified or not a string, then the default image processor for `config` is loaded (if it is a string). processor (`str` or [`ProcessorMixin`], *optional*): The processor that will be used by the pipeline to preprocess data for the model. This can be a model identifier or an actual processor inheriting from [`ProcessorMixin`]. Processors are used for multi-modal models that require multi-modal inputs, for example, a model that requires both text and image inputs. If not provided, the default processor for the given `model` will be loaded (if it is a string). If `model` is not specified or not a string, then the default processor for `config` is loaded (if it is a string). framework (`str`, *optional*): The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the `model`, or to PyTorch if no model is provided. revision (`str`, *optional*, defaults to `"main"`): When passing a task name or a string model identifier: The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git. use_fast (`bool`, *optional*, defaults to `True`): Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]). use_auth_token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). device (`int` or `str` or `torch.device`): Defines the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank like `1`) on which this pipeline will be allocated. device_map (`str` or `Dict[str, Union[int, str, torch.device]`, *optional*): Sent directly as `model_kwargs` (just a simpler shortcut). 
When `accelerate` library is present, set `device_map="auto"` to compute the most optimized `device_map` automatically (see [here](https://huggingface.co/docs/accelerate/main/en/package_reference/big_modeling#accelerate.cpu_offload) for more information). <Tip warning={true}> Do not use `device_map` AND `device` at the same time as they will conflict </Tip> torch_dtype (`str` or `torch.dtype`, *optional*): Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model (`torch.float16`, `torch.bfloat16`, ... or `"auto"`). trust_remote_code (`bool`, *optional*, defaults to `False`): Whether or not to allow for custom code defined on the Hub in their own modeling, configuration, tokenization or even pipeline files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. model_kwargs (`Dict[str, Any]`, *optional*): Additional dictionary of keyword arguments passed along to the model's `from_pretrained(..., **model_kwargs)` function. kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values). Returns: [`Pipeline`]: A suitable pipeline for the task. Examples: ```python >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer >>> # Sentiment analysis pipeline >>> analyzer = pipeline("sentiment-analysis") >>> # Question answering pipeline, specifying the checkpoint identifier >>> oracle = pipeline( ... "question-answering", model="distilbert/distilbert-base-cased-distilled-squad", tokenizer="google-bert/bert-base-cased" ... ) >>> # Named entity recognition pipeline, passing in a specific model and tokenizer >>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") >>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/pipelines.md
https://huggingface.co/docs/transformers/en/main_classes/pipelines/#the-pipeline-abstraction
#the-pipeline-abstraction
.md
456_2