model_doc/vision-text-dual-encoder.md
# VisionTextDualEncoder

## Overview

The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoders to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings with CLIP-like contrastive image-text training and can then be used for zero-shot vision tasks such as image classification or retrieval.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.

## VisionTextDualEncoderConfig

[[autodoc]] VisionTextDualEncoderConfig

## VisionTextDualEncoderProcessor

[[autodoc]] VisionTextDualEncoderProcessor

## VisionTextDualEncoderModel

[[autodoc]] VisionTextDualEncoderModel
    - forward

## FlaxVisionTextDualEncoderModel

[[autodoc]] FlaxVisionTextDualEncoderModel
    - __call__

## TFVisionTextDualEncoderModel

[[autodoc]] TFVisionTextDualEncoderModel
    - call
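To make the assembly step concrete, below is a minimal sketch of initializing a dual encoder from two pretrained backbones via `from_vision_text_pretrained` and building a matching processor; the ViT and BERT checkpoint names are only illustrative choices, and any compatible vision/text checkpoints should work.

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Combine a pretrained vision encoder and a pretrained text encoder;
# the two projection layers on top are freshly (randomly) initialized.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)

# Build a matching processor from the encoders' preprocessors.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# After CLIP-style contrastive fine-tuning, image/text similarities can be read
# from the model outputs (logits_per_image / logits_per_text).
```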
model_doc/ctrl.md
# CTRL

## Overview

The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.).

The abstract from the paper is the following:

*Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution.*

This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found [here](https://github.com/salesforce/ctrl).

## Usage tips

- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links to generate coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for more information.
- CTRL is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be observed in the *run_generation.py* example script.
- The PyTorch models can take `past_key_values` as input, which is the previously computed key/value attention pairs. TensorFlow models accept `past` as input. Using the `past_key_values` value prevents the model from re-computing pre-computed values in the context of text generation. See the [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument.

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Causal language modeling task guide](../tasks/language_modeling)

## CTRLConfig

[[autodoc]] CTRLConfig

## CTRLTokenizer

[[autodoc]] CTRLTokenizer
    - save_vocabulary

## CTRLModel

[[autodoc]] CTRLModel
    - forward

## CTRLLMHeadModel

[[autodoc]] CTRLLMHeadModel
    - forward

## CTRLForSequenceClassification

[[autodoc]] CTRLForSequenceClassification
    - forward

## TFCTRLModel

[[autodoc]] TFCTRLModel
    - call

## TFCTRLLMHeadModel

[[autodoc]] TFCTRLLMHeadModel
    - call

## TFCTRLForSequenceClassification

[[autodoc]] TFCTRLForSequenceClassification
    - call
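As a rough illustration of the control-code mechanism described in the usage tips, here is a minimal generation sketch where the prompt starts with the `Links` control code; the `Salesforce/ctrl` checkpoint name and sampling settings are assumptions chosen for the example, not prescribed by this page.

```python
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The first token is a control code ("Links" here) that steers the style of the generation.
inputs = tokenizer("Links Hugging Face is a company that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, repetition_penalty=1.2)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```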
model_doc/convnextv2.md
# ConvNeXt V2

## Overview

The ConvNeXt V2 model was proposed in [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of [ConvNeXT](convnext).

The abstract from the paper is the following:

*Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.*

*ConvNeXt V2 architecture. Taken from the original paper.*

This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt-V2).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.

- [`ConvNextV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## ConvNextV2Config

[[autodoc]] ConvNextV2Config

## ConvNextV2Model

[[autodoc]] ConvNextV2Model
    - forward

## ConvNextV2ForImageClassification

[[autodoc]] ConvNextV2ForImageClassification
    - forward

## TFConvNextV2Model

[[autodoc]] TFConvNextV2Model
    - call

## TFConvNextV2ForImageClassification

[[autodoc]] TFConvNextV2ForImageClassification
    - call
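For a quick sanity check of image classification with [`ConvNextV2ForImageClassification`], here is a minimal sketch; the `facebook/convnextv2-tiny-1k-224` checkpoint and the COCO image URL are assumptions used only for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest logit to its ImageNet-1k label.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```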
model_doc/fnet.md
# FNet

## Overview

The FNet model was proposed in [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT model with a Fourier transform which returns only the real parts of the transform. The model is significantly faster than the BERT model because it has fewer parameters and is more memory efficient. The model achieves about 92-97% of the accuracy of its BERT counterparts on the GLUE benchmark, and trains much faster than the BERT model.

The abstract from the paper is the following:

*We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.*

This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/google-research/google-research/tree/master/f_net).

## Usage tips

The model was trained without an attention mask, as it is based on the Fourier transform. The model was trained with a maximum sequence length of 512, which includes pad tokens. Hence, it is highly recommended to use the same maximum sequence length for fine-tuning and inference.

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## FNetConfig

[[autodoc]] FNetConfig

## FNetTokenizer

[[autodoc]] FNetTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## FNetTokenizerFast

[[autodoc]] FNetTokenizerFast

## FNetModel

[[autodoc]] FNetModel
    - forward

## FNetForPreTraining

[[autodoc]] FNetForPreTraining
    - forward

## FNetForMaskedLM

[[autodoc]] FNetForMaskedLM
    - forward

## FNetForNextSentencePrediction

[[autodoc]] FNetForNextSentencePrediction
    - forward

## FNetForSequenceClassification

[[autodoc]] FNetForSequenceClassification
    - forward

## FNetForMultipleChoice

[[autodoc]] FNetForMultipleChoice
    - forward

## FNetForTokenClassification

[[autodoc]] FNetForTokenClassification
    - forward

## FNetForQuestionAnswering

[[autodoc]] FNetForQuestionAnswering
    - forward
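Since FNet is used like a BERT-style encoder, a short masked-language-modeling sketch may help; the `google/fnet-base` checkpoint and the example sentence are assumptions for illustration only.

```python
import torch
from transformers import FNetForMaskedLM, FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")

# Note: no attention mask is needed, the Fourier mixing uses the full (padded) sequence.
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token for the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```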
model_doc/pvt.md
# Pyramid Vision Transformer (PVT)

## Overview

The PVT model was proposed in [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. PVT is a type of vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically, it allows for more fine-grained inputs (4 x 4 pixels per patch) to be used, while simultaneously shrinking the sequence length of the Transformer as it deepens, reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer is used to further reduce the resource consumption when learning high-resolution features.

The abstract from the paper is the following:

*Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to current state of the arts. Different from ViT that typically yields low resolution outputs and incurs high computational and memory costs, PVT not only can be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computations of large feature maps. PVT inherits the advantages of both CNN and Transformer, making it a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.*

This model was contributed by [Xrenya](https://huggingface.co/Xrenya). The original code can be found [here](https://github.com/whai362/PVT).

- PVTv1 on ImageNet-1K

| **Model variant** | **Size** | **Acc@1** | **Params (M)** |
|-------------------|:--------:|:---------:|:--------------:|
| PVT-Tiny          | 224      | 75.1      | 13.2           |
| PVT-Small         | 224      | 79.8      | 24.5           |
| PVT-Medium        | 224      | 81.2      | 44.2           |
| PVT-Large         | 224      | 81.7      | 61.4           |

## PvtConfig

[[autodoc]] PvtConfig

## PvtImageProcessor

[[autodoc]] PvtImageProcessor
    - preprocess

## PvtForImageClassification

[[autodoc]] PvtForImageClassification
    - forward

## PvtModel

[[autodoc]] PvtModel
    - forward
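A minimal classification sketch with [`PvtForImageClassification`] is shown below; the `Zetatech/pvt-tiny-224` checkpoint name is an assumption (check the Hub for the PVT checkpoints that are actually available), and the image URL is just an example input.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, PvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed checkpoint name; substitute any available PVT checkpoint from the Hub.
processor = AutoImageProcessor.from_pretrained("Zetatech/pvt-tiny-224")
model = PvtForImageClassification.from_pretrained("Zetatech/pvt-tiny-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```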
model_doc/bit.md
# Big Transfer (BiT)

## Overview

The BiT model was proposed in [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. BiT is a simple recipe for scaling up pre-training of [ResNet](resnet)-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning.

The abstract from the paper is the following:

*Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.*

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/big_transfer).

## Usage tips

- BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by [group normalization](https://arxiv.org/abs/1803.08494), 2) [weight standardization](https://arxiv.org/abs/1903.10520) is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant impact on transfer learning.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT.

- [`BitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## BitConfig

[[autodoc]] BitConfig

## BitImageProcessor

[[autodoc]] BitImageProcessor
    - preprocess

## BitModel

[[autodoc]] BitModel
    - forward

## BitForImageClassification

[[autodoc]] BitForImageClassification
    - forward
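For a quick end-to-end check, the `image-classification` pipeline can be pointed at a BiT checkpoint; the `google/bit-50` checkpoint name and image URL below are assumptions chosen for illustration.

```python
from transformers import pipeline

# Run a BiT checkpoint through the generic image-classification pipeline.
classifier = pipeline("image-classification", model="google/bit-50")
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")

# Print the two highest-scoring ImageNet labels.
print(predictions[:2])
```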
model_doc/lilt.md
# LiLT

## Overview

The LiLT model was proposed in [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. LiLT makes it possible to combine any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, to enable [LayoutLM](layoutlm)-like document understanding for many languages.

The abstract from the paper is the following:

*Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.*

*LiLT architecture. Taken from the original paper.*

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/jpwang/lilt).

## Usage tips

- To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the [hub](https://huggingface.co/models?search=roberta), refer to [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional). The script will result in `config.json` and `pytorch_model.bin` files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):

```python
from transformers import LiltModel

model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
```

- When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
- As [lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) uses the same vocabulary as [LayoutLMv3](layoutlmv3), one can use [`LayoutLMv3TokenizerFast`] to prepare data for the model. The same is true for [lilt-infoxlm-base](https://huggingface.co/SCUT-DLVCLab/lilt-infoxlm-base): one can use [`LayoutXLMTokenizerFast`] for that model.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.

- Demo notebooks for LiLT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT).

**Documentation resources**

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## LiltConfig

[[autodoc]] LiltConfig

## LiltModel

[[autodoc]] LiltModel
    - forward

## LiltForSequenceClassification

[[autodoc]] LiltForSequenceClassification
    - forward

## LiltForTokenClassification

[[autodoc]] LiltForTokenClassification
    - forward

## LiltForQuestionAnswering

[[autodoc]] LiltForQuestionAnswering
    - forward
model_doc/luke.md
# LUKE

## Overview

The LUKE model was proposed in [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto. It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive and cloze-style question answering, entity typing, and relation classification.

The abstract from the paper is the following:

*Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering).*

This model was contributed by [ikuyamada](https://huggingface.co/ikuyamada) and [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/studio-ousia/luke).

## Usage tips

- This implementation is the same as [`RobertaModel`] with the addition of entity embeddings as well as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
- LUKE treats entities as input tokens; therefore, it takes `entity_ids`, `entity_attention_mask`, `entity_token_type_ids` and `entity_position_ids` as extra input. You can obtain those using [`LukeTokenizer`].
- [`LukeTokenizer`] takes `entities` and `entity_spans` (character-based start and end positions of the entities in the input text) as extra input. `entities` typically consist of [MASK] entities or Wikipedia entities. A brief description of each kind of entity input is as follows:
  - *Inputting [MASK] entities to compute entity representations*: The [MASK] entity is used to mask entities to be predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address downstream tasks requiring the information of entities in text such as entity typing, relation classification, and named entity recognition.
  - *Inputting Wikipedia entities to compute knowledge-enhanced token representations*: LUKE learns rich information (or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as question answering.
- There are three head models for the former use case:
  - [`LukeForEntityClassification`], for tasks to classify a single entity in an input text such as entity typing, e.g. the [Open Entity dataset](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html). This model places a linear head on top of the output entity representation.
  - [`LukeForEntityPairClassification`], for tasks to classify the relationship between two entities such as relation classification, e.g. the [TACRED dataset](https://nlp.stanford.edu/projects/tacred/). This model places a linear head on top of the concatenated output representation of the pair of given entities.
  - [`LukeForEntitySpanClassification`], for tasks to classify the sequence of entity spans, such as named entity recognition (NER). This model places a linear head on top of the output entity representations. You can address NER using this model by inputting all possible entity spans in the text to the model.

  [`LukeTokenizer`] has a `task` argument, which enables you to easily create an input to these head models by specifying `task="entity_classification"`, `task="entity_pair_classification"`, or `task="entity_span_classification"`. Please refer to the example code of each head model.

Usage example:

```python
>>> from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification

>>> model = LukeModel.from_pretrained("studio-ousia/luke-base")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"

>>> text = "Beyoncé lives in Los Angeles."
>>> entity_spans = [(0, 7)]  # character-based entity span corresponding to "Beyoncé"
>>> inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations

>>> entities = [
...     "Beyoncé",
...     "Los Angeles",
... ]  # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
>>> entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model

>>> model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_class_idx = int(logits[0].argmax())
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
```

## Resources

- [A demo notebook on how to fine-tune [`LukeForEntityPairClassification`] for relation classification](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LUKE)
- [Notebooks showcasing how to reproduce the results as reported in the paper with the HuggingFace implementation of LUKE](https://github.com/studio-ousia/luke/tree/master/notebooks)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## LukeConfig

[[autodoc]] LukeConfig

## LukeTokenizer

[[autodoc]] LukeTokenizer
    - __call__
    - save_vocabulary

## LukeModel

[[autodoc]] LukeModel
    - forward

## LukeForMaskedLM

[[autodoc]] LukeForMaskedLM
    - forward

## LukeForEntityClassification

[[autodoc]] LukeForEntityClassification
    - forward

## LukeForEntityPairClassification

[[autodoc]] LukeForEntityPairClassification
    - forward

## LukeForEntitySpanClassification

[[autodoc]] LukeForEntitySpanClassification
    - forward

## LukeForSequenceClassification

[[autodoc]] LukeForSequenceClassification
    - forward

## LukeForMultipleChoice

[[autodoc]] LukeForMultipleChoice
    - forward

## LukeForTokenClassification

[[autodoc]] LukeForTokenClassification
    - forward

## LukeForQuestionAnswering

[[autodoc]] LukeForQuestionAnswering
    - forward
model_doc/gpt_neox.md
# GPT-NeoX

## Overview

We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding, mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source the training and evaluation code, as well as the model weights, at [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).

Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with the generous support of [CoreWeave](https://www.coreweave.com/).

GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:

```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
```

GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation.

## Usage example

The `generate()` method can be used to generate text using the GPT-NeoX model.

```python
>>> from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

>>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
>>> tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")

>>> prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids

>>> gen_tokens = model.generate(
...     input_ids,
...     do_sample=True,
...     temperature=0.9,
...     max_length=100,
... )
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
```

## Resources

- [Causal language modeling task guide](../tasks/language_modeling)

## GPTNeoXConfig

[[autodoc]] GPTNeoXConfig

## GPTNeoXTokenizerFast

[[autodoc]] GPTNeoXTokenizerFast

## GPTNeoXModel

[[autodoc]] GPTNeoXModel
    - forward

## GPTNeoXForCausalLM

[[autodoc]] GPTNeoXForCausalLM
    - forward

## GPTNeoXForQuestionAnswering

[[autodoc]] GPTNeoXForQuestionAnswering
    - forward

## GPTNeoXForSequenceClassification

[[autodoc]] GPTNeoXForSequenceClassification
    - forward

## GPTNeoXForTokenClassification

[[autodoc]] GPTNeoXForTokenClassification
    - forward
model_doc/t5v1.1.md
# T5v1.1

## Overview

T5v1.1 was released in the [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) repository by Colin Raffel et al. It's an improved version of the original T5 model. This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511).

## Usage tips

One can directly plug in the weights of T5v1.1 into a T5 model, like so:

```python
>>> from transformers import T5ForConditionalGeneration

>>> model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```

T5 Version 1.1 includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU. See [this paper](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

Note: T5 Version 1.1 was only pre-trained on [C4](https://huggingface.co/datasets/c4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.

Google has released the following variants:

- [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small)
- [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base)
- [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- [google/t5-v1_1-xl](https://huggingface.co/google/t5-v1_1-xl)
- [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)

Refer to [T5's documentation page](t5) for all API reference, tips, code examples and notebooks.
model_doc/wav2vec2_phoneme.md
# Wav2Vec2Phoneme

## Overview

The Wav2Vec2Phoneme model was proposed in [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.

The abstract from the paper is the following:

*Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.*

Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).

## Usage tips

- Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
- Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2PhonemeCTCTokenizer`].
- Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass to a sequence of phonemes.
- By default, the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words, one should make use of a dictionary and language model.

Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model; for API reference, check out [`Wav2Vec2`](wav2vec2)'s documentation page, except for the tokenizer documented below.

## Wav2Vec2PhonemeCTCTokenizer

[[autodoc]] Wav2Vec2PhonemeCTCTokenizer
    - __call__
    - batch_decode
    - decode
    - phonemize
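To illustrate the CTC decoding flow mentioned in the tips, here is a minimal inference sketch; the `facebook/wav2vec2-lv-60-espeak-cv-ft` checkpoint and the dummy LibriSpeech dataset are assumptions chosen for demonstration, and 16 kHz mono audio is assumed.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# Load a small example clip (16 kHz mono audio as a float array).
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the best token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # a sequence of phonemes, not words
```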
model_doc/pix2struct.md
# Pix2Struct

## Overview

The Pix2Struct model was proposed in [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.

The abstract from the paper is the following:

> Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.

Tips:

Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning, visual question answering (VQA) over different inputs (books, charts, science diagrams), captioning UI components etc. The full list can be found in Table 1 of the paper. We therefore advise you to use these models for the tasks they have been fine-tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine-tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine-tuned on the natural images captioning dataset and so on.

If you want to use the model to perform conditional text captioning, make sure to use the processor with `add_special_tokens=False`.

This model was contributed by [ybelkada](https://huggingface.co/ybelkada). The original code can be found [here](https://github.com/google-research/pix2struct).

## Resources

- [Fine-tuning Notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb)
- [All models](https://huggingface.co/models?search=pix2struct)

## Pix2StructConfig

[[autodoc]] Pix2StructConfig
    - from_text_vision_configs

## Pix2StructTextConfig

[[autodoc]] Pix2StructTextConfig

## Pix2StructVisionConfig

[[autodoc]] Pix2StructVisionConfig

## Pix2StructProcessor

[[autodoc]] Pix2StructProcessor

## Pix2StructImageProcessor

[[autodoc]] Pix2StructImageProcessor
    - preprocess

## Pix2StructTextModel

[[autodoc]] Pix2StructTextModel
    - forward

## Pix2StructVisionModel

[[autodoc]] Pix2StructVisionModel
    - forward

## Pix2StructForConditionalGeneration

[[autodoc]] Pix2StructForConditionalGeneration
    - forward
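To make the conditional-captioning tip above concrete, here is a minimal sketch of unconditional and conditional captioning; the `google/pix2struct-textcaps-base` checkpoint, the COCO image URL, and the `"A picture of"` prefix are assumptions used only for illustration.

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Unconditional captioning: the image alone is enough.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated_ids[0], skip_special_tokens=True))

# Conditional captioning: pass a text prefix and disable special tokens, as noted above.
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```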
model_doc/transfo-xl.md
# Transformer XL

This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to `pickle.load`. We recommend switching to more recent models for improved security.

In case you would still like to use `TransfoXL` in your experiments, we recommend using the [Hub checkpoint](https://huggingface.co/transfo-xl-wt103) with a specific revision to ensure you are downloading safe files from the Hub:

```python
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

checkpoint = 'transfo-xl-wt103'
revision = '40a186da79458c9f9de846edfaea79c412137f97'

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```

If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0. You can do so by running the following command: `pip install -U transformers==4.35.0`.

## Overview

The Transformer-XL model was proposed in [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied).

The abstract from the paper is the following:

*Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.*

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/kimiyoung/transformer-xl).

## Usage tips

- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
- Transformer-XL is one of the few models that has no sequence length limit.
- Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
- Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments.
- This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed.

TransformerXL does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035).

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Causal language modeling task guide](../tasks/language_modeling)

## TransfoXLConfig

[[autodoc]] TransfoXLConfig

## TransfoXLTokenizer

[[autodoc]] TransfoXLTokenizer
    - save_vocabulary

## TransfoXL specific outputs

[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput

## TransfoXLModel

[[autodoc]] TransfoXLModel
    - forward

## TransfoXLLMHeadModel

[[autodoc]] TransfoXLLMHeadModel
    - forward

## TransfoXLForSequenceClassification

[[autodoc]] TransfoXLForSequenceClassification
    - forward

## TFTransfoXLModel

[[autodoc]] TFTransfoXLModel
    - call

## TFTransfoXLLMHeadModel

[[autodoc]] TFTransfoXLLMHeadModel
    - call

## TFTransfoXLForSequenceClassification

[[autodoc]] TFTransfoXLForSequenceClassification
    - call

## Internal Layers

[[autodoc]] AdaptiveEmbedding

[[autodoc]] TFAdaptiveEmbedding
model_doc/gpt-sw3.md
# GPT-Sw3

## Overview

The GPT-Sw3 model was first proposed in [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. Since that first paper the authors have extended their work and trained new models on their new 1.2TB corpus named The Nordic Pile.

GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.

This model was contributed by [AI Sweden](https://huggingface.co/AI-Sweden).

## Usage example

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden/gpt-sw3-356m")
>>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden/gpt-sw3-356m")

>>> input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]

>>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]

>>> print(tokenizer.decode(generated_token_ids))
Träd är fina för att de är färgstarka. Men ibland är det fint
```

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Causal language modeling task guide](../tasks/language_modeling)

The implementation uses the `GPT2Model` coupled with our `GPTSw3Tokenizer`. Refer to [GPT2Model documentation](gpt2) for API reference and examples.

Note that sentencepiece is required to use our tokenizer and can be installed with `pip install transformers[sentencepiece]` or `pip install sentencepiece`.

## GPTSw3Tokenizer

[[autodoc]] GPTSw3Tokenizer
    - save_vocabulary
model_doc/visual_bert.md
# VisualBERT

## Overview

The VisualBERT model was proposed in [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. VisualBERT is a neural network trained on a variety of (image, text) pairs.

The abstract from the paper is the following:

*We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.*

This model was contributed by [gchhablani](https://huggingface.co/gchhablani). The original code can be found [here](https://github.com/uclanlp/visualbert).

## Usage tips

1. Most of the checkpoints provided work with the [`VisualBertForPreTraining`] configuration. Other checkpoints provided are the fine-tuned checkpoints for downstream tasks - VQA ('visualbert-vqa'), VCR ('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is recommended that you use the pretrained checkpoints.
2. For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints. We do not provide the detector and its weights as a part of the package, but it will be available in the research projects, and the states can be loaded directly into the detector provided.

VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice, visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical dimension.

To feed images to the model, each image is passed through a pre-trained object detector and the regions and the bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of vectors to a standard BERT model. The text input is concatenated in front of the visual embeddings in the embedding layer, and is expected to be bounded by [CLS] and [SEP] tokens, as in BERT. The segment IDs must also be set appropriately for the textual and visual parts.

The [`BertTokenizer`] is used to encode the text. A custom detector/image processor must be used to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:

- [VisualBERT VQA demo notebook](https://github.com/huggingface/transformers/tree/main/examples/research_projects/visual_bert): This notebook contains an example on VisualBERT VQA.
- [Generate Embeddings for VisualBERT (Colab Notebook)](https://colab.research.google.com/drive/1bLGxKdldwqnMVA5x4neY7-l_8fKGWQYI?usp=sharing): This notebook contains an example on how to generate visual embeddings.

The following example shows how to get the last hidden state using [`VisualBertModel`]:

```python
>>> import torch
>>> from transformers import BertTokenizer, VisualBertModel

>>> model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

>>> inputs = tokenizer("What is the man eating?", return_tensors="pt")
>>> # this is a custom function that returns the visual embeddings given the image path
>>> visual_embeds = get_visual_embeddings(image_path)

>>> visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
>>> visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

>>> inputs.update(
...     {
...         "visual_embeds": visual_embeds,
...         "visual_token_type_ids": visual_token_type_ids,
...         "visual_attention_mask": visual_attention_mask,
...     }
... )

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
```

## VisualBertConfig

[[autodoc]] VisualBertConfig

## VisualBertModel

[[autodoc]] VisualBertModel
    - forward

## VisualBertForPreTraining

[[autodoc]] VisualBertForPreTraining
    - forward

## VisualBertForQuestionAnswering

[[autodoc]] VisualBertForQuestionAnswering
    - forward

## VisualBertForMultipleChoice

[[autodoc]] VisualBertForMultipleChoice
    - forward

## VisualBertForVisualReasoning

[[autodoc]] VisualBertForVisualReasoning
    - forward

## VisualBertForRegionToPhraseAlignment

[[autodoc]] VisualBertForRegionToPhraseAlignment
    - forward
model_doc/dinov2.md
# DINOv2

## Overview

The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.

The abstract from the paper is the following:

*The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.*

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/dinov2).

## Usage tips

The model can be traced using `torch.jit.trace` which leverages JIT compilation to optimize the model making it faster to run. Note this still produces some mis-matched elements and the difference between the original model and the traced model is of the order of 1e-4.

```python
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]

# We have to force return_dict=False for tracing
model.config.return_dict = False

with torch.no_grad():
    traced_model = torch.jit.trace(model, [inputs.pixel_values])
    traced_outputs = traced_model(inputs.pixel_values)

print((last_hidden_states - traced_outputs[0]).abs().max())
```

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2.

- Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎
- [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## Dinov2Config

[[autodoc]] Dinov2Config

## Dinov2Model

[[autodoc]] Dinov2Model
    - forward

## Dinov2ForImageClassification

[[autodoc]] Dinov2ForImageClassification
    - forward
model_doc/canine.md
# CANINE ## Overview The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level. Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient downsampling strategy, before applying a deep Transformer encoder. The abstract from the paper is the following: *Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine). ## Usage tips - CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally, after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper. - CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`] to prepare text for the model. - Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper. Model checkpoints: - [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). - [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). 
## Usage example CANINE works on raw characters, so it can be used **without a tokenizer**: thon >>> from transformers import CanineModel >>> import torch >>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss >>> text = "hello world" >>> # use Python's built-in ord() function to turn each character into its unicode code point id >>> input_ids = torch.tensor([[ord(char) for char in text]]) >>> outputs = model(input_ids) # forward pass >>> pooled_output = outputs.pooler_output >>> sequence_output = outputs.last_hidden_state For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all sequences to the same length): thon >>> from transformers import CanineTokenizer, CanineModel >>> model = CanineModel.from_pretrained("google/canine-c") >>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c") >>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."] >>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt") >>> outputs = model(**encoding) # forward pass >>> pooled_output = outputs.pooler_output >>> sequence_output = outputs.last_hidden_state ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Multiple choice task guide](../tasks/multiple_choice) ## CanineConfig [[autodoc]] CanineConfig ## CanineTokenizer [[autodoc]] CanineTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences ## CANINE specific outputs [[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling ## CanineModel [[autodoc]] CanineModel - forward ## CanineForSequenceClassification [[autodoc]] CanineForSequenceClassification - forward ## CanineForMultipleChoice [[autodoc]] CanineForMultipleChoice - forward ## CanineForTokenClassification [[autodoc]] CanineForTokenClassification - forward ## CanineForQuestionAnswering [[autodoc]] CanineForQuestionAnswering - forward
model_doc/upernet.md
# UPerNet

## Overview

The UPerNet model was proposed in [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment a wide range of concepts from images, leveraging any vision backbone like [ConvNeXt](convnext) or [Swin](swin).

The abstract from the paper is the following:

*Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.*

UPerNet framework. Taken from the original paper.

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code is based on OpenMMLab's mmsegmentation [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/uper_head.py).

## Usage examples

UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:

```python
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```

To use another vision backbone, like [ConvNeXt](convnext), simply instantiate the model with the appropriate backbone:

```python
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```

Note that this will randomly initialize all the weights of the model.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.

- Demo notebooks for UPerNet can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UPerNet).
- [`UperNetForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb).
- See also: [Semantic segmentation task guide](../tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## UperNetConfig

[[autodoc]] UperNetConfig

## UperNetForSemanticSegmentation

[[autodoc]] UperNetForSemanticSegmentation
    - forward
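For inference with pretrained weights rather than a randomly initialized model, a minimal sketch along the following lines should work. It assumes a fine-tuned checkpoint such as `openmmlab/upernet-convnext-small` is available on the Hub; swap in whichever UPerNet checkpoint you actually use.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# NOTE: assumed checkpoint name for illustration; replace with the UPerNet
# checkpoint you want to run.
checkpoint = "openmmlab/upernet-convnext-small"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = UperNetForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# resize the per-pixel class scores to the original image size and take the argmax
logits = torch.nn.functional.interpolate(
    outputs.logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = logits.argmax(dim=1)[0]
print(segmentation_map.shape)  # (height, width), one predicted class id per pixel
```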
model_doc/phi.md
# Phi ## Overview The Phi-1 model was proposed in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li. The Phi-1.5 model was proposed in [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee. ### Summary In Phi-1 and Phi-1.5 papers, the authors showed how important the quality of the data is in training relative to the model size. They selected high quality "textbook" data alongside with synthetically generated data for training their small sized Transformer based model Phi-1 with 1.3B parameters. Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. They follow the same strategy for Phi-1.5 and created another 1.3B parameter model with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs. Phi-1.5 exhibits many of the traits of much larger LLMs such as the ability to “think step by step” or perform some rudimentary in-context learning. With these two experiments the authors successfully showed the huge impact of quality of training data when training machine learning models. The abstract from the Phi-1 paper is the following: *We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as phi-1 that still achieves 45% on HumanEval.* The abstract from the Phi-1.5 paper is the following: *We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories – a 10 million parameter model that can produce coherent English – and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate “textbook quality” data as a way to enhance the learning process compared to traditional web data. We follow the “Textbooks Are All You Need” approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. 
More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good –such as the ability to “think step by step” or perform some rudimentary in-context learning– and bad, including hallucinations and the potential for toxic and biased generations –encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.*

This model was contributed by [Susnato Dhar](https://huggingface.co/susnato).
The original code for Phi-1 and Phi-1.5 can be found [here](https://huggingface.co/microsoft/phi-1/blob/main/modeling_mixformer_sequential.py) and [here](https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py) respectively.

## Usage tips

- This model is quite similar to `Llama`, with the main difference in [`PhiDecoderLayer`], where the [`PhiAttention`] and [`PhiMLP`] layers are used in a parallel configuration.
- The tokenizer used for this model is identical to the [`CodeGenTokenizer`].

### Example:

```python
>>> from transformers import PhiForCausalLM, AutoTokenizer

>>> # define the model and tokenizer.
>>> model = PhiForCausalLM.from_pretrained("susnato/phi-1_5_dev")
>>> tokenizer = AutoTokenizer.from_pretrained("susnato/phi-1_5_dev")

>>> # feel free to change the prompt to your liking.
>>> prompt = "If I were an AI that had just achieved"

>>> # apply the tokenizer.
>>> tokens = tokenizer(prompt, return_tensors="pt")

>>> # use the model to generate new tokens.
>>> generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10)

>>> tokenizer.batch_decode(generated_output)[0]
'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled'
```

## PhiConfig

[[autodoc]] PhiConfig

## PhiModel

[[autodoc]] PhiModel
    - forward

## PhiForCausalLM

[[autodoc]] PhiForCausalLM
    - forward
    - generate

## PhiForSequenceClassification

[[autodoc]] PhiForSequenceClassification
    - forward

## PhiForTokenClassification

[[autodoc]] PhiForTokenClassification
    - forward
model_doc/idefics.md
# IDEFICS ## Overview The IDEFICS model was proposed in [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents ](https://huggingface.co/papers/2306.16527 ) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh The abstract from the paper is the following: *Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate a text. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELISC, we train an 80 billion parameters vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself.* This model was contributed by [HuggingFaceM4](https://huggingface.co/HuggingFaceM4). The original code can be found [here](). (TODO: don't have a public link yet). IDEFICS modeling code in Transformers is for finetuning and inferencing the pre-trained IDEFICS models. To train a new IDEFICS model from scratch use the m4 codebase (a link will be provided once it's made public) ## IdeficsConfig [[autodoc]] IdeficsConfig ## IdeficsModel [[autodoc]] IdeficsModel - forward ## IdeficsForVisionText2Text [[autodoc]] IdeficsForVisionText2Text - forward ## IdeficsImageProcessor [[autodoc]] IdeficsImageProcessor - preprocess ## IdeficsProcessor [[autodoc]] IdeficsProcessor - __call__
model_doc/mra.md
# MRA ## Overview The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. The abstract from the paper is the following: *Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.* This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/mra-attention). ## MraConfig [[autodoc]] MraConfig ## MraModel [[autodoc]] MraModel - forward ## MraForMaskedLM [[autodoc]] MraForMaskedLM - forward ## MraForSequenceClassification [[autodoc]] MraForSequenceClassification - forward ## MraForMultipleChoice [[autodoc]] MraForMultipleChoice - forward ## MraForTokenClassification [[autodoc]] MraForTokenClassification - forward ## MraForQuestionAnswering [[autodoc]] MraForQuestionAnswering - forward
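Since MRA is a drop-in replacement for full self-attention in a BERT-style encoder, the usual fill-mask workflow applies. The sketch below is illustrative only: it assumes a checkpoint such as `uw-madison/mra-base-512-4` is available on the Hub and ships a compatible tokenizer with a mask token; substitute the MRA checkpoint you actually use.

```python
import torch
from transformers import AutoTokenizer, MraForMaskedLM

# NOTE: assumed checkpoint name for illustration; replace it with the MRA
# checkpoint (and matching tokenizer) you want to use.
checkpoint = "uw-madison/mra-base-512-4"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = MraForMaskedLM.from_pretrained(checkpoint)

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# pick the highest-scoring token at the mask position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```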
model_doc/gptj.md
# GPT-J ## Overview The GPT-J model was released in the [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like causal language model trained on [the Pile](https://pile.eleuther.ai/) dataset. This model was contributed by [Stella Biderman](https://huggingface.co/stellaathena). ## Usage tips - To load [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) in float32 one would need at least 2x model size RAM: 1x for initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB RAM to just load the model. To reduce the RAM usage there are a few options. The `torch_dtype` argument can be used to initialize the model in half-precision on a CUDA device only. There is also a fp16 branch which stores the fp16 weights, which could be used to further minimize the RAM usage: thon >>> from transformers import GPTJForCausalLM >>> import torch >>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained( "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, ).to(device) - The model should fit on 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. Adam optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients. So it would need at least 4x model size GPU memory, even with mixed precision as gradient updates are in fp32. This is not including the activations and data batches, which would again require some more GPU RAM. So one should explore solutions such as DeepSpeed, to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for that could be found [here](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md) - Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between embedding matrix size and vocab size, the tokenizer for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) contains 143 extra tokens `<|extratoken_1|> <|extratoken_143|>`, so the `vocab_size` of tokenizer also becomes 50400. ## Usage examples The [`~generation.GenerationMixin.generate`] method can be used to generate text using GPT-J model. thon >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." 
) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] or in float16 precision: thon >>> from transformers import GPTJForCausalLM, AutoTokenizer >>> import torch >>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device) >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) >>> gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - Description of [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B). - A blog on how to [Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker](https://huggingface.co/blog/gptj-sagemaker). - A blog on how to [Accelerate GPT-J inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/gptj-deepspeed-inference). - A blog post introducing [GPT-J-6B: 6B JAX-Based Transformer](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/). 🌎 - A notebook for [GPT-J-6B Inference Demo](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb). 🌎 - Another notebook demonstrating [Inference with GPT-J-6B](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`GPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxGPTJForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb). 
**Documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) ## GPTJConfig [[autodoc]] GPTJConfig - all ## GPTJModel [[autodoc]] GPTJModel - forward ## GPTJForCausalLM [[autodoc]] GPTJForCausalLM - forward ## GPTJForSequenceClassification [[autodoc]] GPTJForSequenceClassification - forward ## GPTJForQuestionAnswering [[autodoc]] GPTJForQuestionAnswering - forward ## TFGPTJModel [[autodoc]] TFGPTJModel - call ## TFGPTJForCausalLM [[autodoc]] TFGPTJForCausalLM - call ## TFGPTJForSequenceClassification [[autodoc]] TFGPTJForSequenceClassification - call ## TFGPTJForQuestionAnswering [[autodoc]] TFGPTJForQuestionAnswering - call ## FlaxGPTJModel [[autodoc]] FlaxGPTJModel - __call__ ## FlaxGPTJForCausalLM [[autodoc]] FlaxGPTJForCausalLM - __call__
model_doc/clap.md
# CLAP ## Overview The CLAP model was proposed in [Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation](https://arxiv.org/pdf/2211.06687.pdf) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed in to predict the most relevant text snippet, given an audio, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similar score. The abstract from the paper is the following: *Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zeroshot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-6* This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ) . The original code can be found [here](https://github.com/LAION-AI/Clap). ## ClapConfig [[autodoc]] ClapConfig - from_text_audio_configs ## ClapTextConfig [[autodoc]] ClapTextConfig ## ClapAudioConfig [[autodoc]] ClapAudioConfig ## ClapFeatureExtractor [[autodoc]] ClapFeatureExtractor ## ClapProcessor [[autodoc]] ClapProcessor ## ClapModel [[autodoc]] ClapModel - forward - get_text_features - get_audio_features ## ClapTextModel [[autodoc]] ClapTextModel - forward ## ClapTextModelWithProjection [[autodoc]] ClapTextModelWithProjection - forward ## ClapAudioModel [[autodoc]] ClapAudioModel - forward ## ClapAudioModelWithProjection [[autodoc]] ClapAudioModelWithProjection - forward
model_doc/roberta-prelayernorm.md
# RoBERTa-PreLayerNorm ## Overview The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. It is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/). The abstract from the paper is the following: *fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.* This model was contributed by [andreasmaden](https://huggingface.co/andreasmadsen). The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain). ## Usage tips - The implementation is the same as [Roberta](roberta) except instead of using _Add and Norm_ it does _Norm and Add_. _Add_ and _Norm_ refers to the Addition and LayerNormalization as described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762). - This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## RobertaPreLayerNormConfig [[autodoc]] RobertaPreLayerNormConfig ## RobertaPreLayerNormModel [[autodoc]] RobertaPreLayerNormModel - forward ## RobertaPreLayerNormForCausalLM [[autodoc]] RobertaPreLayerNormForCausalLM - forward ## RobertaPreLayerNormForMaskedLM [[autodoc]] RobertaPreLayerNormForMaskedLM - forward ## RobertaPreLayerNormForSequenceClassification [[autodoc]] RobertaPreLayerNormForSequenceClassification - forward ## RobertaPreLayerNormForMultipleChoice [[autodoc]] RobertaPreLayerNormForMultipleChoice - forward ## RobertaPreLayerNormForTokenClassification [[autodoc]] RobertaPreLayerNormForTokenClassification - forward ## RobertaPreLayerNormForQuestionAnswering [[autodoc]] RobertaPreLayerNormForQuestionAnswering - forward ## TFRobertaPreLayerNormModel [[autodoc]] TFRobertaPreLayerNormModel - call ## TFRobertaPreLayerNormForCausalLM [[autodoc]] TFRobertaPreLayerNormForCausalLM - call ## TFRobertaPreLayerNormForMaskedLM [[autodoc]] TFRobertaPreLayerNormForMaskedLM - call ## TFRobertaPreLayerNormForSequenceClassification [[autodoc]] TFRobertaPreLayerNormForSequenceClassification - call ## TFRobertaPreLayerNormForMultipleChoice [[autodoc]] TFRobertaPreLayerNormForMultipleChoice - call ## TFRobertaPreLayerNormForTokenClassification [[autodoc]] TFRobertaPreLayerNormForTokenClassification - call ## TFRobertaPreLayerNormForQuestionAnswering [[autodoc]] TFRobertaPreLayerNormForQuestionAnswering - call ## FlaxRobertaPreLayerNormModel [[autodoc]] FlaxRobertaPreLayerNormModel - __call__ ## FlaxRobertaPreLayerNormForCausalLM [[autodoc]] FlaxRobertaPreLayerNormForCausalLM - __call__ ## FlaxRobertaPreLayerNormForMaskedLM [[autodoc]] FlaxRobertaPreLayerNormForMaskedLM - __call__ ## FlaxRobertaPreLayerNormForSequenceClassification [[autodoc]] 
FlaxRobertaPreLayerNormForSequenceClassification - __call__ ## FlaxRobertaPreLayerNormForMultipleChoice [[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice - __call__ ## FlaxRobertaPreLayerNormForTokenClassification [[autodoc]] FlaxRobertaPreLayerNormForTokenClassification - __call__ ## FlaxRobertaPreLayerNormForQuestionAnswering [[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering - __call__
model_doc/herbert.md
# HerBERT ## Overview The HerBERT model was proposed in [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based Language Model trained on Polish Corpora using only MLM objective with dynamic masking of whole words. The abstract from the paper is the following: *In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.* This model was contributed by [rmroczkowski](https://huggingface.co/rmroczkowski). The original code can be found [here](https://github.com/allegro/HerBERT). ## Usage example thon >>> from transformers import HerbertTokenizer, RobertaModel >>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") >>> model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") >>> encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt") >>> outputs = model(encoded_input) >>> # HerBERT can also be loaded using AutoTokenizer and AutoModel: >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") >>> model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") Herbert implementation is the same as `BERT` except for the tokenization method. Refer to [BERT documentation](bert) for API reference and examples. ## HerbertTokenizer [[autodoc]] HerbertTokenizer ## HerbertTokenizerFast [[autodoc]] HerbertTokenizerFast
model_doc/bridgetower.md
# BridgeTower

## Overview

The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs.

This paper has been accepted to the [AAAI'23](https://aaai.org/Conferences/AAAI-23/) conference.

The abstract from the paper is the following:

*Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.*

BridgeTower architecture. Taken from the original paper.

This model was contributed by [Anahita Bhiwandiwalla](https://huggingface.co/anahita-b), [Tiep Le](https://huggingface.co/Tile) and [Shaoyen Tseng](https://huggingface.co/shaoyent). The original code can be found [here](https://github.com/microsoft/BridgeTower).

## Usage tips and examples

BridgeTower consists of a visual encoder, a textual encoder and a cross-modal encoder with multiple lightweight bridge layers. The goal of this approach is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder.
In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture.

The [`BridgeTowerProcessor`] wraps [`RobertaTokenizer`] and [`BridgeTowerImageProcessor`] into a single instance to both encode the text and prepare the images respectively.

The following example shows how to run contrastive learning using [`BridgeTowerProcessor`] and [`BridgeTowerForContrastiveLearning`].
thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> # forward pass >>> scores = dict() >>> for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs The following example shows how to run image-text retrieval using [`BridgeTowerProcessor`] and [`BridgeTowerForImageAndTextRetrieval`]. thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> # forward pass >>> scores = dict() >>> for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0, 1].item() The following example shows how to run masked language modeling using [`BridgeTowerProcessor`] and [`BridgeTowerForMaskedLM`]. thon >>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000360943.jpg" >>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB") >>> text = "a looking out of the window" >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> # prepare inputs >>> encoding = processor(image, text, return_tensors="pt") >>> # forward pass >>> outputs = model(**encoding) >>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) >>> print(results) .a cat looking out of the window. Tips: - This implementation of BridgeTower uses [`RobertaTokenizer`] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings. - Checkpoints for pre-trained [bridgeTower-base](https://huggingface.co/BridgeTower/bridgetower-base) and [bridgetower masked language modeling and image text matching](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) are released. - Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other down stream tasks. - The PyTorch version of this model is only available in torch 1.10 and higher. 
## BridgeTowerConfig [[autodoc]] BridgeTowerConfig ## BridgeTowerTextConfig [[autodoc]] BridgeTowerTextConfig ## BridgeTowerVisionConfig [[autodoc]] BridgeTowerVisionConfig ## BridgeTowerImageProcessor [[autodoc]] BridgeTowerImageProcessor - preprocess ## BridgeTowerProcessor [[autodoc]] BridgeTowerProcessor - __call__ ## BridgeTowerModel [[autodoc]] BridgeTowerModel - forward ## BridgeTowerForContrastiveLearning [[autodoc]] BridgeTowerForContrastiveLearning - forward ## BridgeTowerForMaskedLM [[autodoc]] BridgeTowerForMaskedLM - forward ## BridgeTowerForImageAndTextRetrieval [[autodoc]] BridgeTowerForImageAndTextRetrieval - forward
model_doc/cpmant.md
# CPMAnt ## Overview CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. [See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## Resources - A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## CpmAntConfig [[autodoc]] CpmAntConfig - all ## CpmAntTokenizer [[autodoc]] CpmAntTokenizer - all ## CpmAntModel [[autodoc]] CpmAntModel - all ## CpmAntForCausalLM [[autodoc]] CpmAntForCausalLM - all
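As a rough sketch of how the classes above fit together (not an official example), text generation could look as follows. It assumes the `openbmb/cpm-ant-10b` repository hosts both the weights and the tokenizer files, and keep in mind that a 10B-parameter model needs a correspondingly large amount of memory.

```python
from transformers import CpmAntForCausalLM, CpmAntTokenizer

# NOTE: assumed checkpoint name for illustration; loading the full 10B model
# requires substantial CPU/GPU memory.
checkpoint = "openbmb/cpm-ant-10b"
tokenizer = CpmAntTokenizer.from_pretrained(checkpoint)
model = CpmAntForCausalLM.from_pretrained(checkpoint)

# CPM-Ant is a Chinese language model, so prompt it in Chinese
inputs = tokenizer("今天天气真好,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.batch_decode(outputs)[0])
```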
model_doc/focalnet.md
# FocalNet ## Overview The FocalNet model was proposed in [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like [ViT](vit) and [Swin](swin)) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation. The abstract from the paper is the following: *We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/FocalNet). ## FocalNetConfig [[autodoc]] FocalNetConfig ## FocalNetModel [[autodoc]] FocalNetModel - forward ## FocalNetForMaskedImageModeling [[autodoc]] FocalNetForMaskedImageModeling - forward ## FocalNetForImageClassification [[autodoc]] FocalNetForImageClassification - forward
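FocalNet checkpoints follow the standard image classification API in Transformers, so a minimal inference sketch looks like the usual ViT-style pipeline. It assumes an ImageNet-1k checkpoint such as `microsoft/focalnet-tiny` is available on the Hub; replace it with whichever FocalNet checkpoint you use.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

# NOTE: assumed checkpoint name for illustration
checkpoint = "microsoft/focalnet-tiny"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = FocalNetForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring logit back to a human-readable ImageNet label
print(model.config.id2label[logits.argmax(-1).item()])
```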
model_doc/opt.md
# OPT ## Overview The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI. OPT is a series of open-sourced large causal language models which perform similar in performance to GPT3. The abstract from the paper is the following: *Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/facebookresearch/metaseq). Tips: - OPT has the same architecture as [`BartDecoder`]. - Contrary to GPT2, OPT adds the EOS token `` to the beginning of every prompt. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎 - A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [`OPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling). 
- [Text classification task guide](../tasks/sequence_classification)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [`OPTForQuestionAnswering`] is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.

⚡️ Inference

- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.

## Combining OPT and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [flash-attn](https://github.com/Dao-AILab/flash-attention) repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import OPTForCausalLM, GPT2Tokenizer

>>> device = "cuda"  # the device to load the model onto

>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, use_flash_attention_2=True)
>>> tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")

>>> prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
...           "Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
...           "there?")

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
```

### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `facebook/opt-2.7b` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `facebook/opt-350m` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
## OPTConfig [[autodoc]] OPTConfig ## OPTModel [[autodoc]] OPTModel - forward ## OPTForCausalLM [[autodoc]] OPTForCausalLM - forward ## OPTForSequenceClassification [[autodoc]] OPTForSequenceClassification - forward ## OPTForQuestionAnswering [[autodoc]] OPTForQuestionAnswering - forward ## TFOPTModel [[autodoc]] TFOPTModel - call ## TFOPTForCausalLM [[autodoc]] TFOPTForCausalLM - call ## FlaxOPTModel [[autodoc]] FlaxOPTModel - __call__ ## FlaxOPTForCausalLM [[autodoc]] FlaxOPTForCausalLM - __call__
model_doc/blenderbot-small.md
# Blenderbot Small

Note that [`BlenderbotSmallModel`] and [`BlenderbotSmallForConditionalGeneration`] are only used in combination with the checkpoint [facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M). Larger Blenderbot checkpoints should instead be used with [`BlenderbotModel`] and [`BlenderbotForConditionalGeneration`].

## Overview

The Blender chatbot model was proposed in [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.

The abstract of the paper is the following:

*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.*

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/facebookresearch/ParlAI).

## Usage tips

Blenderbot Small is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.

## Resources

- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## BlenderbotSmallConfig

[[autodoc]] BlenderbotSmallConfig

## BlenderbotSmallTokenizer

[[autodoc]] BlenderbotSmallTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## BlenderbotSmallTokenizerFast

[[autodoc]] BlenderbotSmallTokenizerFast

## BlenderbotSmallModel

[[autodoc]] BlenderbotSmallModel
    - forward

## BlenderbotSmallForConditionalGeneration

[[autodoc]] BlenderbotSmallForConditionalGeneration
    - forward

## BlenderbotSmallForCausalLM

[[autodoc]] BlenderbotSmallForCausalLM
    - forward

## TFBlenderbotSmallModel

[[autodoc]] TFBlenderbotSmallModel
    - call

## TFBlenderbotSmallForConditionalGeneration

[[autodoc]] TFBlenderbotSmallForConditionalGeneration
    - call

## FlaxBlenderbotSmallModel

[[autodoc]] FlaxBlenderbotSmallModel
    - __call__
    - encode
    - decode

## FlaxBlenderbotSmallForConditionalGeneration

[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
    - __call__
    - encode
    - decode
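For a quick feel of the 90M checkpoint referenced at the top of this page, a minimal generation sketch looks as follows; the exact reply will depend on the generation settings you choose.

```python
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

# the 90M checkpoint referenced at the top of this page
mname = "facebook/blenderbot-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)

UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")

# the model is an encoder-decoder, so generate() produces the bot's reply
reply_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```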
model_doc/mobilevitv2.md
# MobileViTV2 ## Overview The MobileViTV2 model was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari. MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The abstract from the paper is the following: *Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.* This model was contributed by [shehan97](https://huggingface.co/shehan97). The original code can be found [here](https://github.com/apple/ml-cvnets). ## Usage tips - MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). ## MobileViTV2Config [[autodoc]] MobileViTV2Config ## MobileViTV2Model [[autodoc]] MobileViTV2Model - forward ## MobileViTV2ForImageClassification [[autodoc]] MobileViTV2ForImageClassification - forward ## MobileViTV2ForSemanticSegmentation [[autodoc]] MobileViTV2ForSemanticSegmentation - forward
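Putting the tips above together, a minimal classification sketch could look as follows. It assumes a checkpoint such as `apple/mobilevitv2-1.0-imagenet1k-256` is available on the Hub; [`MobileViTImageProcessor`] (loaded here through [`AutoImageProcessor`]) takes care of resizing and the channel-order handling mentioned above.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

# NOTE: assumed checkpoint name for illustration; replace with the MobileViTV2
# checkpoint you want to use.
checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the model predicts one of the 1,000 ImageNet-1k classes
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```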
model_doc/cvt.md
# Convolutional Vision Transformer (CvT) ## Overview The CvT model was proposed in [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the [Vision Transformer (ViT)](vit) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. The abstract from the paper is the following: *We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (\ie shift, scale, and distortion invariance) while maintaining the merits of Transformers (\ie dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (\eg ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7\% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.* This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/microsoft/CvT). ## Usage tips - CvT models are regular Vision Transformers, but trained with convolutions. They outperform the [original model (ViT)](vit) when fine-tuned on ImageNet-1K and CIFAR-100. - You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`AutoImageProcessor`] and [`ViTForImageClassification`] by [`CvtForImageClassification`]). - The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT. - [`CvtForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. ## CvtConfig [[autodoc]] CvtConfig ## CvtModel [[autodoc]] CvtModel - forward ## CvtForImageClassification [[autodoc]] CvtForImageClassification - forward ## TFCvtModel [[autodoc]] TFCvtModel - call ## TFCvtForImageClassification [[autodoc]] TFCvtForImageClassification - call
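To complement the notebooks linked in the resources, here is a minimal inference sketch along the lines suggested in the usage tips (using [`AutoImageProcessor`] together with [`CvtForImageClassification`]). The checkpoint name is an assumption; any CvT image-classification checkpoint from the Hub can be substituted.

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, CvtForImageClassification

# checkpoint name is an assumption; any fine-tuned CvT classification checkpoint works the same way
processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # (batch_size, num_labels)
print(model.config.id2label[logits.argmax(-1).item()])
```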
model_doc/data2vec.md
# Data2Vec ## Overview The Data2Vec model was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets. The abstract from the paper is the following: *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.* This model was contributed by [edugp](https://huggingface.co/edugp) and [patrickvonplaten](https://huggingface.co/patrickvonplaten). [sayakpaul](https://github.com/sayakpaul) and [Rocketknight1](https://github.com/Rocketknight1) contributed Data2Vec for vision in TensorFlow. The original code (for NLP and Speech) can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec). The original code for vision can be found [here](https://github.com/facebookresearch/data2vec_vision/tree/main/beit). ## Usage tips - Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method. - For Data2VecAudio, preprocessing is identical to [`Wav2Vec2Model`], including feature extraction - For Data2VecText, preprocessing is identical to [`RobertaModel`], including tokenization. - For Data2VecVision, preprocessing is identical to [`BeitModel`], including feature extraction. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec. - [`Data2VecVisionForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - To fine-tune [`TFData2VecVisionForImageClassification`] on a custom dataset, see [this notebook](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb). 
**Data2VecText documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) **Data2VecAudio documentation resources** - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) **Data2VecVision documentation resources** - [Image classification](../tasks/image_classification) - [Semantic segmentation](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Data2VecTextConfig [[autodoc]] Data2VecTextConfig ## Data2VecAudioConfig [[autodoc]] Data2VecAudioConfig ## Data2VecVisionConfig [[autodoc]] Data2VecVisionConfig ## Data2VecAudioModel [[autodoc]] Data2VecAudioModel - forward ## Data2VecAudioForAudioFrameClassification [[autodoc]] Data2VecAudioForAudioFrameClassification - forward ## Data2VecAudioForCTC [[autodoc]] Data2VecAudioForCTC - forward ## Data2VecAudioForSequenceClassification [[autodoc]] Data2VecAudioForSequenceClassification - forward ## Data2VecAudioForXVector [[autodoc]] Data2VecAudioForXVector - forward ## Data2VecTextModel [[autodoc]] Data2VecTextModel - forward ## Data2VecTextForCausalLM [[autodoc]] Data2VecTextForCausalLM - forward ## Data2VecTextForMaskedLM [[autodoc]] Data2VecTextForMaskedLM - forward ## Data2VecTextForSequenceClassification [[autodoc]] Data2VecTextForSequenceClassification - forward ## Data2VecTextForMultipleChoice [[autodoc]] Data2VecTextForMultipleChoice - forward ## Data2VecTextForTokenClassification [[autodoc]] Data2VecTextForTokenClassification - forward ## Data2VecTextForQuestionAnswering [[autodoc]] Data2VecTextForQuestionAnswering - forward ## Data2VecVisionModel [[autodoc]] Data2VecVisionModel - forward ## Data2VecVisionForImageClassification [[autodoc]] Data2VecVisionForImageClassification - forward ## Data2VecVisionForSemanticSegmentation [[autodoc]] Data2VecVisionForSemanticSegmentation - forward ## TFData2VecVisionModel [[autodoc]] TFData2VecVisionModel - call ## TFData2VecVisionForImageClassification [[autodoc]] TFData2VecVisionForImageClassification - call ## TFData2VecVisionForSemanticSegmentation [[autodoc]] TFData2VecVisionForSemanticSegmentation - call
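Since preprocessing for Data2VecAudio follows [`Wav2Vec2Model`] (see the usage tips above), a minimal speech-recognition sketch looks just like its Wav2Vec2 counterpart. The checkpoint and the dummy dataset name below are assumptions used only for illustration.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Data2VecAudioForCTC

# checkpoint name is an assumption (a CTC-fine-tuned Data2VecAudio model)
processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

# small dummy LibriSpeech split commonly used in examples; assumed to be available
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding of the predicted token ids
print(processor.batch_decode(logits.argmax(dim=-1))[0])
```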
model_doc/nllb.md
# NLLB

## Updated tokenizer behavior

**DISCLAIMER:** The default behaviour for the tokenizer was fixed and thus changed in April 2023. The previous version added `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong, as the NLLB paper mentions (page 48, 6.1.1. Model Architecture):

*Note that we prefix the source sequence with the source language, as opposed to the target language as previously done in several works (Arivazhagan et al., 2019; Johnson et al., 2017). This is primarily because we prioritize optimizing zero-shot performance of our model on any pair of 200 languages at a minor cost to supervised performance.*

Previous behaviour:

```python
>>> from transformers import NllbTokenizer

>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[13374, 1398, 4260, 4039, 248130, 2, 256047]

>>> # 2: '</s>'
>>> # 256047 : 'eng_Latn'
```

New behaviour:

```python
>>> from transformers import NllbTokenizer

>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
```

Enabling the old behaviour can be done as follows:

```python
>>> from transformers import NllbTokenizer

>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)
```

For more details, feel free to check the linked [PR](https://github.com/huggingface/transformers/pull/22313) and [Issue](https://github.com/huggingface/transformers/issues/19943).

## Overview

The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang.

The abstract of the paper is the following:

*Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks.
Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.*

This implementation contains the dense models available on release.

**The sparse model NLLB-MoE (Mixture of Experts) is now available! More details [here](nllb-moe)**

This model was contributed by [Lysandre](https://huggingface.co/lysandre). The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/nllb).

## Generating with NLLB

While generating the target text, set the `forced_bos_token_id` to the target language id. The following example shows how to translate English to French using the *facebook/nllb-200-distilled-600M* model.

Note that we're using the BCP-47 code for French, `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) for the list of all BCP-47 codes in the Flores 200 dataset.

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

>>> article = "UN Chief says there is no military solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")

>>> translated_tokens = model.generate(
...     **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie
```

### Generating from any other language than English

English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization.

See the example below for a translation from Romanian to German:

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained(
...     "facebook/nllb-200-distilled-600M", token=True, src_lang="ron_Latn"
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", token=True)

>>> article = "Şeful ONU spune că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(article, return_tensors="pt")

>>> translated_tokens = model.generate(
...     **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
UN-Chef sagt, es gibt keine militärische Lösung in Syrien
```

## Resources

- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## NllbTokenizer

[[autodoc]] NllbTokenizer
    - build_inputs_with_special_tokens

## NllbTokenizerFast

[[autodoc]] NllbTokenizerFast
model_doc/m2m_100.md
# M2M100

## Overview

The M2M100 model was proposed in [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.

The abstract from the paper is the following:

*Existing work in translation demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-Centric by training only on data which was translated from or to English. While this is supported by large sources of training data, it does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. We build and open source a training dataset that covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively to the best single systems of WMT. We open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.*

This model was contributed by [valhalla](https://huggingface.co/valhalla).

## Usage tips and examples

M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is multilingual, it expects the sequences in a certain format: a special language id token is used as a prefix in both the source and target text. The source text format is `[lang_code] X [eos]`, where `lang_code` is the source language id for the source text and the target language id for the target text, and `X` is the source or target text.

The [`M2M100Tokenizer`] depends on `sentencepiece`, so be sure to install it before running the examples. To install `sentencepiece` run `pip install sentencepiece`.

**Supervised Training**

```python
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")

src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."

model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")

loss = model(**model_inputs).loss  # forward pass
```

**Generation**

M2M100 uses the `eos_token_id` as the `decoder_start_token_id` for generation, with the target language id being forced as the first generated token. To force the target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate from Hindi to French and from Chinese to English using the *facebook/m2m100_418M* checkpoint.
```python
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
>>> chinese_text = "生活就像一盒巧克力。"

>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

>>> # translate Hindi to French
>>> tokenizer.src_lang = "hi"
>>> encoded_hi = tokenizer(hi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."

>>> # translate Chinese to English
>>> tokenizer.src_lang = "zh"
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Life is like a box of chocolate."
```

## Resources

- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## M2M100Config

[[autodoc]] M2M100Config

## M2M100Tokenizer

[[autodoc]] M2M100Tokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## M2M100Model

[[autodoc]] M2M100Model
    - forward

## M2M100ForConditionalGeneration

[[autodoc]] M2M100ForConditionalGeneration
    - forward
model_doc/perceiver.md
# Perceiver ## Overview The Perceiver IO model was proposed in [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. Perceiver IO is a generalization of [Perceiver](https://arxiv.org/abs/2103.03206) to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs. The abstract from the paper is the following: *The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.* Here's a TLDR explaining how Perceiver works: The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512 tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, perform it on a set of latent variables, and only use the inputs for cross-attention. In this way, the time and memory requirements don't depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are randomly initialized, after which they are trained end-to-end using backpropagation. Internally, [`PerceiverModel`] will create the latents, which is a tensor of shape `(batch_size, num_latents, d_latents)`. One must provide `inputs` (which could be text, images, audio, you name it!) to the model, which it will use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. 
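To make the latent bottleneck tangible, here is a minimal sketch. It assumes DeepMind's byte-level language checkpoint on the Hub and that the hidden states returned by the model are the encoder's latent states; whatever the (byte-level) input length, those states keep the fixed latent shape.

```python
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

# checkpoint name is an assumption (DeepMind's byte-level language Perceiver)
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

inputs = tokenizer(
    "The Perceiver attends over a fixed set of latents.",
    padding="max_length",
    return_tensors="pt",
)
outputs = model(**inputs, output_hidden_states=True)

# hidden states live in the latent space: (batch_size, num_latents, d_latents),
# independent of how many input bytes were provided
print(outputs.hidden_states[-1].shape)
print(model.config.num_latents, model.config.d_latents)
```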
One can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along the sequence dimension, and placing a linear layer on top of that to project the `d_latents` to `num_labels`. This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the last hidden states of the latents, using the outputs as queries, and the latents as keys and values. So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes, providing `inputs` of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the `outputs` as being of shape: `(batch_size, 2048, 768)`. Next, one performs cross-attention with the final hidden states of the latents to update the `outputs` tensor. After cross-attention, one still has a tensor of shape `(batch_size, 2048, 768)`. One can then place a regular language modeling head on top, to project the last dimension to the vocabulary size of the model, i.e. creating logits of shape `(batch_size, 2048, 262)` (as Perceiver uses a vocabulary size of 262 byte IDs). Perceiver IO architecture. Taken from the original paper This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035) ## Resources - The quickest way to get started with the Perceiver is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver). - Refer to the [blog post](https://huggingface.co/blog/perceiver) if you want to fully understand how the model works and is implemented in the library. Note that the models available in the library only showcase some examples of what you can do with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection, audio classification, video classification, etc. 
- [Text classification task guide](../tasks/sequence_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Image classification task guide](../tasks/image_classification) ## Perceiver specific outputs [[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput ## PerceiverConfig [[autodoc]] PerceiverConfig ## PerceiverTokenizer [[autodoc]] PerceiverTokenizer - __call__ ## PerceiverFeatureExtractor [[autodoc]] PerceiverFeatureExtractor - __call__ ## PerceiverImageProcessor [[autodoc]] PerceiverImageProcessor - preprocess ## PerceiverTextPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor ## PerceiverImagePreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor ## PerceiverOneHotPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor ## PerceiverAudioPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor ## PerceiverMultimodalPreprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor ## PerceiverProjectionDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder ## PerceiverBasicDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder ## PerceiverClassificationDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder ## PerceiverOpticalFlowDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder ## PerceiverBasicVideoAutoencodingDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder ## PerceiverMultimodalDecoder [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder ## PerceiverProjectionPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor ## PerceiverAudioPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor ## PerceiverClassificationPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor ## PerceiverMultimodalPostprocessor [[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor ## PerceiverModel [[autodoc]] PerceiverModel - forward ## PerceiverForMaskedLM [[autodoc]] PerceiverForMaskedLM - forward ## PerceiverForSequenceClassification [[autodoc]] PerceiverForSequenceClassification - forward ## PerceiverForImageClassificationLearned [[autodoc]] PerceiverForImageClassificationLearned - forward ## PerceiverForImageClassificationFourier [[autodoc]] PerceiverForImageClassificationFourier - forward ## PerceiverForImageClassificationConvProcessing [[autodoc]] PerceiverForImageClassificationConvProcessing - forward ## PerceiverForOpticalFlow [[autodoc]] PerceiverForOpticalFlow - forward ## PerceiverForMultimodalAutoencoding [[autodoc]] PerceiverForMultimodalAutoencoding - forward
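The classification decoders listed above can be exercised end-to-end with the usual image-classification recipe. A minimal sketch follows; the checkpoint name is assumed to be DeepMind's learned-position-encoding vision model on the Hub.

```python
import requests
from PIL import Image
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationLearned

# checkpoint name is an assumption
processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # (batch_size, num_labels), decoded from the latents
print(model.config.id2label[logits.argmax(-1).item()])
```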
model_doc/yolos.md
# YOLOS ## Overview The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN. The abstract from the paper is the following: *Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS.* YOLOS architecture. Taken from the original paper. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/YOLOS). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS. - All example notebooks illustrating inference + fine-tuning [`YolosForObjectDetection`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS). - See also: [Object detection task guide](../tasks/object_detection) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Use [`YolosImageProcessor`] for preparing images (and optional targets) for the model. Contrary to [DETR](detr), YOLOS doesn't require a `pixel_mask` to be created. ## YolosConfig [[autodoc]] YolosConfig ## YolosImageProcessor [[autodoc]] YolosImageProcessor - preprocess - pad - post_process_object_detection ## YolosFeatureExtractor [[autodoc]] YolosFeatureExtractor - __call__ - pad - post_process_object_detection ## YolosModel [[autodoc]] YolosModel - forward ## YolosForObjectDetection [[autodoc]] YolosForObjectDetection - forward
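As a concrete illustration of the note above (images prepared with [`YolosImageProcessor`], no `pixel_mask` required), here is a minimal detection sketch. The checkpoint name is an assumption; any YOLOS detection checkpoint from the Hub should behave the same way.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# checkpoint name is an assumption
processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")  # no pixel_mask needed for YOLOS
outputs = model(**inputs)

# convert raw logits/boxes to detections above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```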
model_doc/vision-encoder-decoder.md
# Vision Encoder Decoder Models

## Overview

The [`VisionEncoderDecoderModel`] can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (*e.g.* [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)).

The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.

After such a [`VisionEncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below for more information).

An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [`VisionEncoderDecoderModel`].

## Randomly initializing `VisionEncoderDecoderModel` from model configurations

[`VisionEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`ViTModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder.

```python
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()

>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = VisionEncoderDecoderModel(config=config)
```

## Initializing `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder

[`VisionEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, *e.g.* [Swin](swin), can serve as the encoder, and pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* the decoder of BART, can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [`VisionEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
To do so, the `VisionEncoderDecoderModel` class provides a [`VisionEncoderDecoderModel.from_encoder_decoder_pretrained`] method.

```python
>>> from transformers import VisionEncoderDecoderModel

>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
... )
```

## Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference

To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [`VisionEncoderDecoderModel`] provides the `from_pretrained()` method just like any other model architecture in Transformers.

To perform inference, one uses the [`generate`] method, which allows autoregressively generating text.
This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.

```python
>>> import requests
>>> from PIL import Image

>>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel

>>> # load a fine-tuned image captioning model and corresponding tokenizer and image processor
>>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

>>> # let's perform inference on an image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values

>>> # autoregressively generate caption (uses greedy decoding by default)
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
a cat laying on a blanket next to a cat laying on a bed
```

## Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`

[`TFVisionEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:

```python
>>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel

>>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

>>> _model.encoder.save_pretrained("./encoder")
>>> _model.decoder.save_pretrained("./decoder")

>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
... )
>>> # This is only for copying some specific attributes of this particular model.
>>> model.config = _model.config
```

## Training

Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: `pixel_values` (which are the images) and `labels` (which are the `input_ids` of the encoded target sequence).

```python
>>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
>>> from datasets import load_dataset

>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values

>>> labels = tokenizer(
...     "an image of two cats chilling on a couch",
...     return_tensors="pt",
... ).input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(pixel_values=pixel_values, labels=labels).loss
```

This model was contributed by [nielsr](https://github.com/nielsrogge). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh).
## VisionEncoderDecoderConfig [[autodoc]] VisionEncoderDecoderConfig ## VisionEncoderDecoderModel [[autodoc]] VisionEncoderDecoderModel - forward - from_encoder_decoder_pretrained ## TFVisionEncoderDecoderModel [[autodoc]] TFVisionEncoderDecoderModel - call - from_encoder_decoder_pretrained ## FlaxVisionEncoderDecoderModel [[autodoc]] FlaxVisionEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained
model_doc/codegen.md
# CodeGen ## Overview The CodeGen model was proposed in [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen is an autoregressive language model for program synthesis trained sequentially on [The Pile](https://pile.eleuther.ai/), BigQuery, and BigPython. The abstract from the paper is the following: *Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: [this https URL](https://github.com/salesforce/codegen).* This model was contributed by [Hiroaki Hayashi](https://huggingface.co/rooa). The original code can be found [here](https://github.com/salesforce/codegen). ## Checkpoint Naming * CodeGen model [checkpoints](https://huggingface.co/models?other=codegen) are available on different pre-training data with variable sizes. * The format is: `Salesforce/codegen-{size}-{data}`, where * `size`: `350M`, `2B`, `6B`, `16B` * `data`: * `nl`: Pre-trained on the Pile * `multi`: Initialized with `nl`, then further pre-trained on multiple programming languages data * `mono`: Initialized with `multi`, then further pre-trained on Python data * For example, `Salesforce/codegen-350M-mono` offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python. 
## Usage example

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> checkpoint = "Salesforce/codegen-350M-mono"
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)

>>> text = "def hello_world():"

>>> completion = model.generate(**tokenizer(text, return_tensors="pt"))

>>> print(tokenizer.decode(completion[0]))
def hello_world():
    print("Hello World")

hello_world()
```

## Resources

- [Causal language modeling task guide](../tasks/language_modeling)

## CodeGenConfig

[[autodoc]] CodeGenConfig
    - all

## CodeGenTokenizer

[[autodoc]] CodeGenTokenizer
    - save_vocabulary

## CodeGenTokenizerFast

[[autodoc]] CodeGenTokenizerFast

## CodeGenModel

[[autodoc]] CodeGenModel
    - forward

## CodeGenForCausalLM

[[autodoc]] CodeGenForCausalLM
    - forward
model_doc/dpt.md
# DPT

## Overview

The DPT model was proposed in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
DPT is a model that leverages the [Vision Transformer (ViT)](vit) as backbone for dense prediction tasks like semantic segmentation and depth estimation.

The abstract from the paper is the following:

*We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art.*

DPT architecture. Taken from the original paper.

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/DPT).

## Usage tips

DPT is compatible with the [`AutoBackbone`] class. This allows using the DPT framework with various computer vision backbones available in the library, such as [`VitDetBackbone`] or [`Dinov2Backbone`]. One can create it as follows:

```python
from transformers import Dinov2Config, DPTConfig, DPTForDepthEstimation

# initialize with a Transformer-based backbone such as DINOv2
# in that case, we also specify `reshape_hidden_states=False` to get feature maps of shape (batch_size, num_channels, height, width)
backbone_config = Dinov2Config.from_pretrained("facebook/dinov2-base", out_features=["stage1", "stage2", "stage3", "stage4"], reshape_hidden_states=False)

config = DPTConfig(backbone_config=backbone_config)
model = DPTForDepthEstimation(config=config)
```

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.

- Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
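For the depth-estimation checkpoints mentioned in the resources above, inference follows the usual processor/model pattern. A minimal sketch follows; the checkpoint name is an assumption and any DPT depth-estimation checkpoint from the Hub should work the same way.

```python
import torch
import requests
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

# checkpoint name is an assumption
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # (batch_size, height, width)

# resize the prediction back to the original image size
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
).squeeze()
print(depth.shape)
```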
## DPTConfig [[autodoc]] DPTConfig ## DPTFeatureExtractor [[autodoc]] DPTFeatureExtractor - __call__ - post_process_semantic_segmentation ## DPTImageProcessor [[autodoc]] DPTImageProcessor - preprocess - post_process_semantic_segmentation ## DPTModel [[autodoc]] DPTModel - forward ## DPTForDepthEstimation [[autodoc]] DPTForDepthEstimation - forward ## DPTForSemanticSegmentation [[autodoc]] DPTForSemanticSegmentation - forward
main_classes/deepspeed.md
# DeepSpeed Integration [DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for: 1. Optimizer state partitioning (ZeRO stage 1) 2. Gradient partitioning (ZeRO stage 2) 3. Parameter partitioning (ZeRO stage 3) 4. Custom mixed precision training handling 5. A range of fast CUDA-extension-based optimizers 6. ZeRO-Offload to CPU and NVMe ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857). DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference. DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won't be possible on a single GPU. 🤗 Transformers integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options: 1. Integration of the core DeepSpeed features via [`Trainer`]. This is an everything-done-for-you type of integration - just supply your custom config file or use our template and you have nothing else to do. Most of this document is focused on this feature. 2. If you don't use [`Trainer`] and want to use your own Trainer where you integrated DeepSpeed yourself, core functionality functions like `from_pretrained` and `from_config` include integration of essential parts of DeepSpeed like `zero.Init` for ZeRO stage 3 and higher. To tap into this feature read the docs on [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration). What is integrated: Training: 1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVME offload). Inference: 1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see: [zero-inference](#zero-inference). There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon). ## Trainer Deepspeed Integration ### Installation Install the library via pypi: ```bash pip install deepspeed or via `transformers`' `extras`: ```bash pip install transformers[deepspeed] or find more details on [the DeepSpeed's GitHub page](https://github.com/microsoft/deepspeed#installation) and [advanced install](https://www.deepspeed.ai/tutorials/advanced-install/). If you're still struggling with the build, first make sure to read [CUDA Extension Installation Notes](trainer#cuda-extension-installation-notes). If you don't prebuild the extensions and rely on them to be built at run time and you tried all of the above solutions to no avail, the next thing to try is to pre-build the modules before installing them. To make a local build for DeepSpeed: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \ --global-option="build_ext" --global-option="-j8" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log If you intend to use NVMe offload you will also need to include `DS_BUILD_AIO=1` in the instructions above (and also install *libaio-dev* system-wide). 
Edit `TORCH_CUDA_ARCH_LIST` to insert the code for the architectures of the GPU cards you intend to use. Assuming all your cards are the same you can get the arch via: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())" So if you get `8, 6`, then use `TORCH_CUDA_ARCH_LIST="8.6"`. If you have multiple different cards, you can list all of them like so `TORCH_CUDA_ARCH_LIST="6.1;8.6"` If you need to use the same setup on multiple machines, make a binary wheel: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel it will generate something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` which now you can install as `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` locally or on any other machine. Again, remember to ensure to adjust `TORCH_CUDA_ARCH_LIST` to the target architectures. You can find the complete list of NVIDIA GPUs and their corresponding **Compute Capabilities** (same as arch in this context) [here](https://developer.nvidia.com/cuda-gpus). You can check the archs pytorch was built with using: ```bash python -c "import torch; print(torch.cuda.get_arch_list())" Here is how to find out the arch for one of the installed GPUs. For example, for GPU 0: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda')))" If the output is: ```bash _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82) then you know that this card's arch is `8.6`. You can also leave `TORCH_CUDA_ARCH_LIST` out completely and then the build program will automatically query the architecture of the GPUs the build is made on. This may or may not match the GPUs on the target machines, that's why it's best to specify the desired archs explicitly. If after trying everything suggested you still encounter build issues, please, proceed with the GitHub Issue of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), ### Deployment with multiple GPUs To deploy the DeepSpeed integration adjust the [`Trainer`] command line arguments to include a new argument `--deepspeed ds_config.json`, where `ds_config.json` is the DeepSpeed configuration file as documented [here](https://www.deepspeed.ai/docs/config-json/). The file naming is up to you. It's recommended to use DeepSpeed's `add_config_arguments` utility to add the necessary command line arguments to your code. For more information please see [DeepSpeed's Argument Parsing](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) doc. You can use a launcher of your choice here. You can continue using the pytorch launcher: ```bash torch.distributed.run --nproc_per_node=2 your_program.py --deepspeed ds_config.json or use the launcher provided by `deepspeed`: ```bash deepspeed --num_gpus=2 your_program.py --deepspeed ds_config.json As you can see the arguments aren't the same, but for most needs either of them works. The full details on how to configure various nodes and GPUs can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). When you use the `deepspeed` launcher and you want to use all available gpus you can just omit the `--num_gpus` flag. 
Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs: ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro Note that in the DeepSpeed documentation you are likely to see `--deepspeed --deepspeed_config ds_config.json` - i.e. two DeepSpeed-related arguments, but for the sake of simplicity, and since there are already so many arguments to deal with, we combined the two into a single argument. For some practical usage examples, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400). ### Deployment with one GPU To deploy DeepSpeed with one GPU adjust the [`Trainer`] command line arguments as follows: ```bash deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero2.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro This is almost the same as with multiple-GPUs, but here we tell DeepSpeed explicitly to use just one GPU via `--num_gpus=1`. By default, DeepSpeed deploys all GPUs it can see on the given node. If you have only 1 GPU to start with, then you don't need this argument. The following [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) discusses the launcher options. Why would you want to use DeepSpeed with just one GPU? 1. It has a ZeRO-offload feature which can delegate some computations and memory to the host's CPU and RAM, and thus leave more GPU resources for model's needs - e.g. larger batch size, or enabling a fitting of a very big model which normally won't fit. 2. It provides a smart GPU memory management system, that minimizes memory fragmentation, which again allows you to fit bigger models and data batches. While we are going to discuss the configuration in details next, the key to getting a huge improvement on a single GPU with DeepSpeed is to have at least the following configuration in the configuration file: ```json { "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "reduce_scatter": true, "reduce_bucket_size": 2e8, "overlap_comm": true, "contiguous_gradients": true } } which enables optimizer offload and some other important features. You may experiment with the buffer sizes, you will find more details in the discussion below. For a practical usage example of this type of deployment, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685). You may also try the ZeRO-3 with CPU and NVMe offload as explained further in this document. Notes: - if you need to run on a specific GPU, which is different from GPU 0, you can't use `CUDA_VISIBLE_DEVICES` to limit the visible scope of available GPUs. Instead, you have to use the following syntax: ```bash deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py In this example, we tell DeepSpeed to use GPU 1 (second gpu). 
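If you drive [`Trainer`] from your own Python script rather than the command line, the same configuration can be wired in programmatically. Below is a minimal sketch, not a complete training script: the argument values mirror the single-GPU CLI example above, and the config path is assumed to exist on disk.

```python
from transformers import TrainingArguments

# minimal sketch: reuse the ZeRO-2 offload JSON shown above via TrainingArguments
training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=1,
    deepspeed="tests/deepspeed/ds_config_zero2.json",  # path to the DeepSpeed config file
)
# you would typically also enable fp16/bf16 here to match the CLI flags, then build the
# Trainer as usual; DeepSpeed is initialized when .train() is called:
# Trainer(model=model, args=training_args, train_dataset=train_dataset).train()
```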
### Deployment with multiple Nodes

The information in this section isn't specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a `deepspeed` launcher that is easier to use than other launchers unless you are in a SLURM environment.

For the duration of this section let's assume that you have 2 nodes with 8 gpus each. And you can reach the first node with `ssh hostname1` and the second node with `ssh hostname2`, and both must be able to reach each other via ssh locally without a password. Of course, you will need to rename these host (node) names to the actual host names you are working with.

#### The torch.distributed.run launcher

For example, to use `torch.distributed.run`, you could do:

```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py --deepspeed ds_config.json
```

You have to ssh to each node and run this same command on each one of them! There is no rush, the launcher will wait until both nodes synchronize.

For more information please see [torchrun](https://pytorch.org/docs/stable/elastic/run.html). Incidentally, this is also the launcher that replaced `torch.distributed.launch` a few pytorch versions back.

#### The deepspeed launcher

To use the `deepspeed` launcher instead, you have to first create a `hostfile` file:

```
hostname1 slots=8
hostname2 slots=8
```

and then you can launch it as:

```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py --deepspeed ds_config.json
```

Unlike the `torch.distributed.run` launcher, `deepspeed` will automatically launch this command on both nodes!

For more information please see [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).

#### Launching in a SLURM environment

In the SLURM environment the following approach can be used. The following is a slurm script `launch.slurm` which you will need to adapt to your specific SLURM environment.

```bash
#SBATCH --job-name=test-nodes        # name
#SBATCH --nodes=2                    # nodes
#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10           # number of cores per tasks
#SBATCH --gres=gpu:8                 # number of gpus
#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out           # output file name

export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901

srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
 --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
 --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
 your_program.py --deepspeed ds_config.json'
```

All that is left is to schedule it to run:

```bash
sbatch launch.slurm
```

`srun` will take care of launching the program simultaneously on all nodes.

#### Use of Non-shared filesystem

By default DeepSpeed expects that a multi-node environment uses a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a [`checkpoint` section](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) with the following setting:

```json
{
  "checkpoint": {
    "use_node_local_storage": true
  }
}
```

Alternatively, you can also use the [`Trainer`]'s `--save_on_each_node` argument, and the above config will be added automatically for you.
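Regardless of which launcher you pick, it can save time to first run a tiny sanity check with the very same launch command, to confirm that all ranks on both nodes can reach each other. Here is a minimal sketch (the script name `sanity_check.py` is just an illustration), assuming the usual NCCL backend:

```python
# sanity_check.py - launch it with the exact same command you plan to use for training
import os

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # reads MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE set by the launcher
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# with the 2 nodes x 8 GPUs setup above you should see 16 lines and a world size of 16
print(f"rank {dist.get_rank()} of {dist.get_world_size()} on {os.uname().nodename} (local_rank {local_rank})")
dist.barrier()
dist.destroy_process_group()
```

If this prints one line per GPU and exits cleanly, the ssh/hostfile/network side is fine and any remaining problems are in the training script or the DeepSpeed config.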
### Deployment in Notebooks The problem with running notebook cells as a script is that there is no normal `deepspeed` launcher to rely on, so under certain setups we have to emulate it. If you're using only 1 GPU, here is how you'd have to adjust your training code in the notebook to use DeepSpeed. thon # DeepSpeed requires a distributed environment even when only one process is used. # This emulates a launcher in the notebook import os os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use os.environ["RANK"] = "0" os.environ["LOCAL_RANK"] = "0" os.environ["WORLD_SIZE"] = "1" # Now proceed as normal, plus pass the deepspeed config file training_args = TrainingArguments(, deepspeed="ds_config_zero3.json") trainer = Trainer() trainer.train() Note: `` stands for the normal arguments that you'd pass to the functions. If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. That is, you have to use the launcher for that purpose and this cannot be accomplished by emulating the distributed environment presented at the beginning of this section. If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with: thon no-style %%bash cat <<'EOT' > ds_config_zero3.json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } EOT If the training script is in a normal file and not in the notebook cells, you can launch `deepspeed` normally via shell from a cell. For example, to use `run_translation.py` you would launch it with: thon no-style !git clone https://github.com/huggingface/transformers !cd transformers; deepspeed examples/pytorch/translation/run_translation.py or with `%%bash` magic, where you can write a multi-line code for the shell program to run: thon no-style %%bash git clone https://github.com/huggingface/transformers cd transformers deepspeed examples/pytorch/translation/run_translation.py In such case you don't need any of the code presented at the beginning of this section. Note: While `%%bash` magic is neat, but currently it buffers the output so you won't see the logs until the process completes. ### Configuration For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the [following documentation](https://www.deepspeed.ai/docs/config-json/). 
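A DeepSpeed configuration file is plain JSON, so if you just want a quick look at which sections a given file defines before diving into the full reference, here is a minimal sketch (reusing the `ds_config_zero3.json` written in the notebook example above):

```python
import json

with open("ds_config_zero3.json") as f:
    ds_config = json.load(f)

# top-level sections, e.g. fp16, optimizer, scheduler, zero_optimization, ...
print(list(ds_config.keys()))
print(ds_config["zero_optimization"]["stage"])  # 3 for the file above
```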
You can find dozens of DeepSpeed configuration examples that address various practical needs in [the DeepSpeedExamples repo](https://github.com/microsoft/DeepSpeedExamples): ```bash git clone https://github.com/microsoft/DeepSpeedExamples cd DeepSpeedExamples find . -name '*json' Continuing the code from above, let's say you're looking to configure the Lamb optimizer. So you can search through the example `.json` files with: ```bash grep -i Lamb $(find . -name '*json') Some more examples are to be found in the [main repo](https://github.com/microsoft/DeepSpeed) as well. When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide. To get an idea of what DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer states cpu offload, uses `AdamW` optimizer and `WarmupLR` scheduler and will enable mixed precision training if `--fp16` is passed: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", } When you execute the program, DeepSpeed will log the configuration it received from the [`Trainer`] to the console, so you can see exactly what was the final configuration passed to it. ### Passing Configuration As discussed in this document normally the DeepSpeed configuration is passed as a path to a json file, but if you're not using the command line interface to configure the training, and instead instantiate the [`Trainer`] via [`TrainingArguments`] then for the `deepspeed` argument you can pass a nested `dict`. This allows you to create the configuration on the fly and doesn't require you to write it to the file system before passing it to [`TrainingArguments`]. To summarize you can do: thon TrainingArguments(, deepspeed="/path/to/ds_config.json") or: thon ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params) TrainingArguments(, deepspeed=ds_config_dict) ### Shared Configuration This section is a must-read Some configuration values are required by both the [`Trainer`] and DeepSpeed to function correctly, therefore, to prevent conflicting definitions, which could lead to hard to detect errors, we chose to configure those via the [`Trainer`] command line arguments. Additionally, some configuration values are derived automatically based on the model's configuration, so instead of remembering to manually adjust multiple values, it's the best to let the [`Trainer`] do the majority of configuration for you. Therefore, in the rest of this guide you will find a special configuration value: `auto`, which when set will be automatically replaced with the correct or most efficient value. 
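For instance, leaning on `auto`, the configuration you pass (here as a nested `dict`, as described above) can stay quite small and the [`Trainer`] fills in the rest. A minimal sketch, where the argument values other than `deepspeed` are purely illustrative:

```python
from transformers import TrainingArguments

ds_config = {
    "fp16": {"enabled": "auto"},
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"},
    },
    "zero_optimization": {"stage": 2},
    "gradient_accumulation_steps": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=4,
    learning_rate=3e-5,
    fp16=True,
    deepspeed=ds_config,  # the `auto` entries are resolved from the arguments above
)
```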
Please feel free to ignore this recommendation and set the values explicitly, in which case be very careful that the [`Trainer`] arguments and the DeepSpeed configuration agree. For example, are you using the same learning rate, batch size, or gradient accumulation settings? If these mismatch, the training may fail in very hard to detect ways. You have been warned.

There are multiple other values that are specific to DeepSpeed only, and those you will have to set manually to suit your needs.

In your own programs, you can also use the following approach if you'd like to modify the DeepSpeed config as a master and configure [`TrainingArguments`] based on that (see the sketch after the steps below). The steps are:

1. Create or load the DeepSpeed configuration to be used as the master configuration
2. Create the [`TrainingArguments`] object based on these values

Do note that some values, such as `scheduler.params.total_num_steps`, are calculated by the [`Trainer`] during `train`, but you can of course do the math yourself.
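A minimal sketch of that flow, assuming the master file (here the illustrative `ds_config_manual.json`) holds explicit, non-`auto` values:

```python
import json

from transformers import TrainingArguments

# 1. load the DeepSpeed config that acts as the master configuration
with open("ds_config_manual.json") as f:
    ds_config = json.load(f)

# 2. derive the matching TrainingArguments from it, so the two sources cannot disagree
training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=ds_config["train_micro_batch_size_per_gpu"],
    gradient_accumulation_steps=ds_config["gradient_accumulation_steps"],
    learning_rate=ds_config["optimizer"]["params"]["lr"],
    deepspeed=ds_config,
)

# roughly: total_num_steps = num_epochs * ceil(num_train_examples / (micro_bs * world_size * grad_accum))
```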
Additionally, `deepspeed==0.4.4` added a new option `round_robin_gradients` which you can enable with: ```json { "zero_optimization": { "round_robin_gradients": true } } This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. Performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism). #### ZeRO-3 Config The following is an example of configuration for ZeRO stage 3: ```json { "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true } } If you are getting OOMs, because your model or activations don't fit into the GPU memory and you have unutilized CPU memory offloading the optimizer states and parameters to CPU memory with `"device": "cpu"` may solve this limitation. If you don't want to offload to CPU memory, use `none` instead of `cpu` for the `device` entry. Offloading to NVMe is discussed further down. Pinned memory is enabled with `pin_memory` set to `true`. This feature can improve the throughput at the cost of making less memory available to other processes. Pinned memory is set aside to the specific process that requested it and its typically accessed much faster than normal CPU memory. **Performance tuning:** - `stage3_max_live_parameters`: `1e9` - `stage3_max_reuse_distance`: `1e9` If hitting OOM reduce `stage3_max_live_parameters` and `stage3_max_reuse_distance`. They should have minimal impact on performance unless you are doing activation checkpointing. `1e9` would consume ~2GB. The memory is shared by `stage3_max_live_parameters` and `stage3_max_reuse_distance`, so it's not additive, it's just 2GB total. `stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. "reuse distance" is a metric we are using to figure out when will a parameter be used again in the future, and we use the `stage3_max_reuse_distance` to decide whether to throw away the parameter or to keep it. If a parameter is going to be used again in near future (less than `stage3_max_reuse_distance`) then we keep it to reduce communication overhead. This is super helpful when you have activation checkpointing enabled, where we do a forward recompute and backward passes a single layer granularity and want to keep the parameter in the forward recompute till the backward The following configuration values depend on the model's hidden size: - `reduce_bucket_size`: `hidden_size*hidden_size` - `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size` - `stage3_param_persistence_threshold`: `10 * hidden_size` therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended values. But, of course, feel free to set these explicitly as well. `stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when model gets saved. With large models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if you plan to resume the training. 
Watch out for future updates that will remove this limitation and make things more flexible. If you're migrating from ZeRO-2 configuration note that `allgather_partitions`, `allgather_bucket_size` and `reduce_scatter` configuration parameters are not used in ZeRO-3. If you keep these in the config file they will just be ignored. - `sub_group_size`: `1e9` `sub_group_size` controls the granularity in which parameters are updated during optimizer steps. Parameters are grouped into buckets of `sub_group_size` and each buckets is updated one at a time. When used with NVMe offload in ZeRO-Infinity, `sub_group_size` therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models. You can leave `sub_group_size` to its default value of *1e9* when not using NVMe offload. You may want to change its default value in the following cases: 1. Running into OOM during optimizer step: Reduce `sub_group_size` to reduce memory utilization of temporary buffers 2. Optimizer Step is taking a long time: Increase `sub_group_size` to improve bandwidth utilization as a result of the increased data buffers. #### ZeRO-0 Config Note that we're listing Stage 0 and 1 last since they are rarely used. Stage 0 is disabling all types of sharding and just using DeepSpeed as DDP. You can turn it on with: ```json { "zero_optimization": { "stage": 0 } } This will essentially disable ZeRO without you needing to change anything else. #### ZeRO-1 Config Stage 1 is Stage 2 minus gradient sharding. You can always try it to speed things a tiny bit to only shard the optimizer states with: ```json { "zero_optimization": { "stage": 1 } } ### NVMe Support ZeRO-Infinity allows for training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms each GPU needs to send and receive very small amounts of data during offloading so modern NVMe proved to be fit to allow for an even larger total memory pool available to your training process. ZeRO-Infinity requires ZeRO-3 enabled. The following configuration example enables NVMe to offload both optimizer states and the params: ```json { "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 4, "fast_init": false }, "offload_param": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 5, "buffer_size": 1e8, "max_in_cpu": 1e9 }, "aio": { "block_size": 262144, "queue_depth": 32, "thread_count": 1, "single_submit": false, "overlap_events": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, } You can choose to offload both optimizer states and params to NVMe, or just one of them or none. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only as it'd be faster (hint: *"device": "cpu"*). Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading). 
Make sure that your `nvme_path` is actually an NVMe, since it will work with the normal hard drive or SSD, but it'll be much much slower. The fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can have ~3.5GB/s read, ~3GB/s write peak speeds). In order to figure out the optimal `aio` configuration block you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998). #### ZeRO-2 vs ZeRO-3 Performance ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs then you may choose to stick to it. It's important to understand that ZeRO-3 enables a much higher scalability capacity at a cost of speed. It's possible to adjust ZeRO-3 configuration to make it perform closer to ZeRO-2: - set `stage3_param_persistence_threshold` to a very large number - larger than the largest parameter, e.g., `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs. - turn off `offload_params` since ZeRO-2 doesn't have that option. The performance will likely improve significantly with just `offload_params` turned off, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you can train. So these help you to trade scalability for speed depending on your needs. #### ZeRO-2 Example Here is a full ZeRO-2 auto-configuration file `ds_config_zero2.json`: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Here is a full ZeRO-2 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. 
```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } #### ZeRO-3 Example Here is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Here is a full ZeRO-3 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": 1e6, "stage3_prefetch_bucket_size": 0.94e6, "stage3_param_persistence_threshold": 1e4, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } #### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance So now you know there are all these different stages. How to decide which of them to use? This section will attempt to address this question. 
In general the following applies: - Speed-wise (left is faster than right) Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads - GPU Memory usage-wise (right is more GPU memory efficient than left) Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads So when you want to get the fastest execution while fitting into minimal number of GPUs, here is the process you could follow. We start with the fastest approach and if running into GPU OOM we then go to the next slower approach, but which will use less GPU memory. And so on and so forth. First of all set batch size to 1 (you can always use gradient accumulation for any desired effective batch size). 1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then 2. Try ZeRO stage 2 first. if OOM then 3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then 4. Switch to ZeRO stage 3 - if OOM then 5. Enable `offload_param` to `cpu` - if OOM then 6. Enable `offload_optimizer` to `cpu` - if OOM then 7. If you still can't fit a batch size of 1 first check various default values and lower them if you can. For example, if you use `generate` and you don't use a wide search beam make it narrower as it'd take a lot of memory. 8. Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs and fp16 on older gpu architectures. 9. If you still OOM you could add more hardware or enable ZeRO-Infinity - that is switch offloads `offload_param` and `offload_optimizer` to `nvme`. You need to make sure it's a very fast nvme. As an anecdote I was able to infer BLOOM-176B on a tiny GPU using ZeRO-Infinity except it was extremely slow. But it worked! You can, of course, work through these steps in reverse by starting with the most GPU memory efficient config and then going backwards. Or try bi-secting it. Once you have your batch size 1 not leading to OOM, measure your effective throughput. Next try to increase the batch size to as large as you can, since the higher the batch size the more efficient the GPUs are as they perform the best when matrices they multiply are huge. Now the performance optimization game starts. You can turn off some offload features or step down in ZeRO stages and increase/decrease batch size and again measure your effective throughput. Rinse and repeat until satisfied. Don't spend forever on it, but if you're about to start a 3 months training - do spend a few days on it to find the most effective throughput-wise setup. So that your training cost will be the lowest and you will finish training faster. In the current crazy-paced ML world, if it takes you an extra month to train something you are likely to miss a golden opportunity. Of course, this is only me sharing an observation and in no way I'm trying to rush you. Before beginning to train BLOOM-176B I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! This effort saved us more than one month of training time. These notes were written primarily for the training mode, but they should mostly apply for inference as well. For example, during inference Gradient Checkpointing is a no-op since it is only useful during training. Additionally, we found out that if you are doing a multi-GPU inference and not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should provide a superior performance. 
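To make step 1 of the list above concrete, here is the in-code equivalent of `--gradient_checkpointing` (a minimal sketch; the model name and config file are just examples):

```python
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# either enable it directly on the model ...
model.gradient_checkpointing_enable()

# ... or let the HF Trainer do it for you via the corresponding argument
training_args = TrainingArguments(
    output_dir="output_dir",
    gradient_checkpointing=True,
    per_device_train_batch_size=1,  # start with batch size 1, as suggested above
    deepspeed="ds_config_zero2.json",
)
```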
Other quick related performance notes:
- if you are training something from scratch always try to have tensors with shapes that are divisible by 16 (e.g. hidden size). For batch size try divisible by 2 at least. There are [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility requirements that are hardware-specific, if you want to squeeze even higher performance from your GPUs.

### Activation Checkpointing or Gradient Checkpointing

Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It's very confusing but this is how it is.

Gradient checkpointing allows one to trade speed for GPU memory, which either allows one to overcome a GPU OOM, or increase the batch size, which often leads to better performance.

HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.

Therefore you have two ways to take advantage of this very beneficial feature:

1. If you want to use an HF Transformers model you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` in the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there.
2. If you write your own model and you want to use DeepSpeed's activation checkpointing you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with DeepSpeed's API. The latter is more flexible since it allows you to offload the forward activations to the CPU memory instead of recalculating them.

### Optimizer and Scheduler

As long as you don't enable `offload_optimizer` you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of HuggingFace scheduler and DeepSpeed optimizer:

| Combos       | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes          | Yes          |
| DS Optimizer | No           | Yes          |

It is possible to use a non-DeepSpeed optimizer when `offload_optimizer` is enabled, as long as it has both a CPU and a GPU implementation (except LAMB).

#### Optimizer

DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended to be used. DeepSpeed can, however, also import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters).

If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values or the defaults for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`.

Here is an example of the auto-configured `optimizer` entry for `AdamW`:

```json
{
   "optimizer": {
       "type": "AdamW",
       "params": {
         "lr": "auto",
         "betas": "auto",
         "eps": "auto",
         "weight_decay": "auto"
       }
   }
}
```

Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules.
The values that get overridden are: - `lr` with the value of `--learning_rate` - `betas` with the value of `--adam_beta1 --adam_beta2` - `eps` with the value of `--adam_epsilon` - `weight_decay` with the value of `--weight_decay` Therefore please remember to tune the shared hyperparameters on the command line. You can also set the values explicitly: ```json { "optimizer": { "type": "AdamW", "params": { "lr": 0.001, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. If you want to use another optimizer which is not listed above, you will have to add to the top level configuration. ```json { "zero_allow_untested_optimizer": true } Similarly to `AdamW`, you can configure other officially supported optimizers. Just remember that those may have different config values. e.g. for Adam you will want `weight_decay` around `0.01`. Additionally, offload works the best when it's used with Deepspeed's CPU Adam optimizer. If you want to use a different optimizer with offload, since `deepspeed==0.8.3` you need to also add: ```json { "zero_force_ds_cpu_optimizer": false } to the top level configuration. #### Scheduler DeepSpeed supports `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters). Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed: - `WarmupLR` via `--lr_scheduler_type constant_with_warmup` - `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value for `--lr_scheduler_type`, therefore, if you don't configure the scheduler this is scheduler that will get configured by default. If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure a 🤗 Transformers version of it. Here is an example of the auto-configured `scheduler` entry for `WarmupLR`: ```json { "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } } } Since *"auto"* is used the [`Trainer`] arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get set are: - `warmup_min_lr` with the value of `0`. - `warmup_max_lr` with the value of `--learning_rate`. - `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise will use `--warmup_ratio` multiplied by the number of training steps and rounded up. - `total_num_steps` with either the value of `--max_steps` or if it is not provided, derived automatically at run time based on the environment and the size of the dataset and other command line arguments (needed for `WarmupDecayLR`). You can, of course, take over any or all of the configuration values and set those yourself: ```json { "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 0.001, "warmup_num_steps": 1000 } } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. 
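If you rely on `--warmup_ratio` rather than `--warmup_steps`, the `auto` value for `warmup_num_steps` is derived as described in the list above; in code the math is simply (a sketch with illustrative numbers):

```python
import math

max_steps = 10_000    # total number of training steps (from --max_steps or derived from the dataset)
warmup_steps = 0      # --warmup_steps; 0 means "not provided"
warmup_ratio = 0.06   # --warmup_ratio

warmup_num_steps = warmup_steps if warmup_steps > 0 else math.ceil(max_steps * warmup_ratio)
print(warmup_num_steps)  # 600
```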
For example, for `WarmupDecayLR`, you can use the following entry: ```json { "scheduler": { "type": "WarmupDecayLR", "params": { "last_batch_iteration": -1, "total_num_steps": "auto", "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } } } and `total_num_steps`, `warmup_max_lr`, `warmup_num_steps` and `total_num_steps` will be set at loading time. ### fp32 Precision Deepspeed supports the full fp32 and the fp16 mixed precision. Because of the much reduced memory needs and faster speed one gets with the fp16 mixed precision, the only time you will want to not use it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in the fp16 mixed precision (e.g. often this happens with bf16-pretrained models). Such models may overflow or underflow leading to `NaN` loss. If this is your case then you will want to use the full fp32 mode, by explicitly disabling the otherwise default fp16 mixed precision mode with: ```json { "fp16": { "enabled": false, } } If you're using the Ampere-architecture based GPU, pytorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please, see [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it. With the 🤗 Trainer you can use `--tf32` to enable it, or disable it with `--tf32 0` or `--no_tf32`. By default the PyTorch default is used. ### Automatic Mixed Precision You can use automatic mixed precision with either a pytorch-like AMP way or the apex-like way: ### fp16 To configure pytorch AMP-like mode with fp16 (float16) set: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of config values are up to you. This mode gets enabled when `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed. You can also enable/disable this mode explicitly: ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options). ### bf16 If bf16 (bfloat16) is desired instead of fp16 then the following configuration section is to be used: ```json { "bf16": { "enabled": "auto" } } bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling. This mode gets enabled when `--bf16` or `--bf16_full_eval` command line args are passed. You can also enable/disable this mode explicitly: ```json { "bf16": { "enabled": true } } As of `deepspeed==0.6.0` the bf16 support is new and experimental. If you use [gradient accumulation](#gradient-accumulation) with bf16-enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation. A work is being done to fix that and provide an option to use a higher precision `dtype` (fp16 or fp32). 
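On the [`Trainer`] side, the flags that drive these `auto` precision entries are just the usual arguments; a minimal sketch (enable only one of `fp16`/`bf16` at a time):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output_dir",
    bf16=True,  # fills `"bf16": {"enabled": "auto"}`; requires Ampere or newer hardware
    # fp16=True,  # alternatively fills `"fp16": {"enabled": "auto"}` on older GPUs
    deepspeed="ds_config_zero3.json",
)
```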
### NCCL Collectives There is the `dtype` of the training regime and there is a separate `dtype` that is used for communication collectives like various reduction and gathering/scattering operations. All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation. Various reduce operations can be quite lossy, for example when gradients are averaged across multiple-gpus, if the communications are done in fp16 or bf16 the outcome is likely be lossy - since when one ads multiple numbers in low precision the result isn't exact. More so with bf16 as it has a lower precision than fp16. Often fp16 is good enough as the loss is minimal when averaging grads which are typically very small. Therefore, by default for half precision training fp16 is used as the default for reduction operations. But you have full control over this functionality and if you choose you can add a small overhead and ensure that reductions will be using fp32 as the accumulation dtype and only when the result is ready it'll get downcast to the half precision `dtype` you're training in. In order to override the default you simply add a new configuration entry: ```json { "communication_data_type": "fp32" } The valid values as of this writing are "fp16", "bfp16", "fp32". note: stage zero 3 had a bug with regards to bf16 comm dtype that was fixed in `deepspeed==0.8.1` ### apex To configure apex AMP-like mode set: ```json "amp": { "enabled": "auto", "opt_level": "auto" } and the [`Trainer`] will automatically configure it based on the values of `args.fp16_backend` and `args.fp16_opt_level`. This mode gets enabled when `--fp16 --fp16_backend apex --fp16_opt_level 01` command line args are passed. You can also configure this mode explicitly: ```json { "amp": { "enabled": true, "opt_level": "O1" } } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options). ### Batch Size To configure batch size, use: ```json { "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto" } and the [`Trainer`] will automatically set `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`. You can also set the values explicitly: ```json { "train_batch_size": 12, "train_micro_batch_size_per_gpu": 4 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. ### Gradient Accumulation To configure gradient accumulation set: ```json { "gradient_accumulation_steps": "auto" } and the [`Trainer`] will automatically set it to the value of `args.gradient_accumulation_steps`. You can also set the value explicitly: ```json { "gradient_accumulation_steps": 3 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. ### Gradient Clipping To configure gradient gradient clipping set: ```json { "gradient_clipping": "auto" } and the [`Trainer`] will automatically set it to the value of `args.max_grad_norm`. You can also set the value explicitly: ```json { "gradient_clipping": 1.0 } But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. 
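As a quick sanity check of the batch-size arithmetic the [`Trainer`] performs for the `auto` values above, with illustrative numbers:

```python
world_size = 2                    # number of GPUs / processes
per_device_train_batch_size = 4   # --per_device_train_batch_size
gradient_accumulation_steps = 8   # --gradient_accumulation_steps

# what ends up in the DeepSpeed config when these entries are set to "auto"
train_micro_batch_size_per_gpu = per_device_train_batch_size
train_batch_size = world_size * per_device_train_batch_size * gradient_accumulation_steps
print(train_micro_batch_size_per_gpu, train_batch_size)  # 4 64
```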
### Getting The Model Weights Out As long as you continue training and resuming using DeepSpeed you don't need to worry about anything. DeepSpeed stores fp32 master weights in its custom checkpoint optimizer files, which are `global_step*/*optim_states.pt` (this is glob pattern), and are saved under the normal checkpoint. **FP16 Weights:** When a model is saved under ZeRO-2, you end up having the normal `pytorch_model.bin` file with the model weights, but they are only the fp16 version of the weights. Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs, therefore `"stage3_gather_16bit_weights_on_model_save": true` is required to get the `Trainer` to save the fp16 version of the weights. If this setting is `False` `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict` it won't be possible to load it back. ```json { "zero_optimization": { "stage3_gather_16bit_weights_on_model_save": true } } **FP32 Weights:** While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to the [models hub](https://huggingface.co/models) or pass it to someone else you most likely will want to get the fp32 weights. This ideally shouldn't be done during training since this is a process that requires a lot of memory, and therefore best to be performed offline after the training is complete. But if desired and you have plenty of free CPU memory it can be done in the same training script. The following sections will discuss both approaches. **Live FP32 Weights Recovery:** This approach may not work if you model is large and you have little free CPU memory left, at the end of the training. If you have saved at least one checkpoint, and you want to use the latest one, you can do the following: thon from transformers.trainer_utils import get_last_checkpoint from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = get_last_checkpoint(trainer.args.output_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) If you're using the `--load_best_model_at_end` class:*~transformers.TrainingArguments* argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then do the same as above: thon from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final") trainer.deepspeed.save_checkpoint(checkpoint_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) Note, that once `load_state_dict_from_zero_checkpoint` was run, the `model` will no longer be usable in the DeepSpeed context of the same application. i.e. you will need to re-initialize the deepspeed engine, since `model.load_state_dict(state_dict)` will remove all the DeepSpeed magic from it. So do this only at the very end of the training. Of course, you don't have to use class:*~transformers.Trainer* and you can adjust the examples above to your own trainer. 
If for some reason you want more refinement, you can also extract the fp32 `state_dict` of the weights and apply these yourself as is shown in the following example: thon from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu model = model.cpu() model.load_state_dict(state_dict) **Offline FP32 Weights Recovery:** DeepSpeed creates a special conversion script `zero_to_fp32.py` which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need to have the configuration file or a `Trainer` to do the extraction. Let's say your checkpoint folder looks like this: ```bash $ ls -l output_dir/checkpoint-1/ -rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/ -rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest -rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt -rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin -rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt -rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json -rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model -rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json -rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json -rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin -rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py* In this example there is just one DeepSpeed checkpoint sub-folder *global_step1*. Therefore to reconstruct the fp32 weights just run: ```bash python zero_to_fp32.py . pytorch_model.bin This is it. `pytorch_model.bin` will now contain the full fp32 model weights consolidated from multiple GPUs. The script will automatically be able to handle either a ZeRO-2 or ZeRO-3 checkpoint. `python zero_to_fp32.py -h` will give you usage details. The script will auto-discover the deepspeed sub-folder using the contents of the file `latest`, which in the current example will contain `global_step1`. Note: currently the script requires 2x general RAM of the final fp32 model weights. ### ZeRO-3 and Infinity Nuances ZeRO-3 is quite different from ZeRO-2 because of its param sharding feature. ZeRO-Infinity further extends ZeRO-3 to support NVMe memory and multiple other speed and scalability improvements. While all the efforts were made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information to be needed. #### Constructing Massive Models DeepSpeed/ZeRO-3 can handle models with Trillions of parameters which may not fit onto the existing RAM. In such cases, but also if you want the initialization to happen much faster, initialize the model using *deepspeed.zero.Init()* context manager (which is also a function decorator), like so: thon from transformers import T5ForConditionalGeneration, T5Config import deepspeed with deepspeed.zero.Init(): config = T5Config.from_pretrained("t5-small") model = T5ForConditionalGeneration(config) As you can see this gives you a randomly initialized model. If you want to use a pretrained model, `model_class.from_pretrained` will activate this feature as long as `is_deepspeed_zero3_enabled()` returns `True`, which currently is setup by the [`TrainingArguments`] object if the passed DeepSpeed configuration file contains ZeRO-3 config section. 
Thus you must create the [`TrainingArguments`] object **before** calling `from_pretrained`. Here is an example of a possible sequence: thon from transformers import AutoModel, Trainer, TrainingArguments training_args = TrainingArguments(, deepspeed=ds_config) model = AutoModel.from_pretrained("t5-small") trainer = Trainer(model=model, args=training_args, ) If you're using the official example scripts and your command line arguments include `--deepspeed ds_config.json` with ZeRO-3 config enabled, then everything is already done for you, since this is how example scripts are written. Note: If the fp16 weights of the model can't fit onto the memory of a single GPU this feature must be used. For full details on this method and other related features please refer to [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models). Also when loading fp16-pretrained models, you will want to tell `from_pretrained` to use `torch_dtype=torch.float16`. For details, please, see [from_pretrained-torch-dtype](#from_pretrained-torch-dtype). #### Gathering Parameters Under ZeRO-3 on multiple GPUs no single GPU has all the parameters unless it's the parameters for the currently executing layer. So if you need to access all parameters from all layers at once there is a specific method to do it. Most likely you won't need it, but if you do please refer to [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination) We do however use it internally in several places, one such example is when loading pretrained model weights in `from_pretrained`. We load one layer at a time and immediately partition it to all participating GPUs, as for very large models it won't be possible to load it on one GPU and then spread it out to multiple GPUs, due to memory limitations. Also under ZeRO-3, if you write your own code and run into a model parameter weight that looks like: thon tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True) stress on `tensor([1.])`, or if you get an error where it says the parameter is of size `1`, instead of some much larger multi-dimensional shape, this means that the parameter is partitioned and what you see is a ZeRO-3 placeholder. ### ZeRO Inference ZeRO Inference uses the same config as ZeRO-3 Training. You just don't need the optimizer and scheduler sections. In fact you can leave these in the config file if you want to share the same one with the training. They will just be ignored. Otherwise you just need to pass the usual [`TrainingArguments`] arguments. For example: ```bash deepspeed --num_gpus=2 your_program.py --do_eval --deepspeed ds_config.json The only important thing is that you need to use a ZeRO-3 configuration, since ZeRO-2 provides no benefit whatsoever for the inference as only ZeRO-3 performs sharding of parameters, whereas ZeRO-1 shards gradients and optimizer states. 
Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs: ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --output_dir output_dir \ --do_eval --max_eval_samples 50 --warmup_steps 50 \ --max_source_length 128 --val_max_target_length 128 \ --overwrite_output_dir --per_device_eval_batch_size 4 \ --predict_with_generate --dataset_config "ro-en" --fp16 \ --source_lang en --target_lang ro --dataset_name wmt16 \ --source_prefix "translate English to Romanian: " Since for inference there is no need for additional large memory used by the optimizer states and the gradients you should be able to fit much larger batches and/or sequence length onto the same hardware. Additionally DeepSpeed is currently developing a related product called Deepspeed-Inference which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can't fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete. ### Memory Requirements Since Deepspeed ZeRO can offload memory to CPU (and NVMe) the framework provides utils that allow one to tell how much CPU and GPU memory will be needed depending on the number of GPUs being used. Let's estimate how much memory is needed to finetune "bigscience/T0_3B" on a single GPU: ```bash $ python -c 'from transformers import AutoModel; \ from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \ model = AutoModel.from_pretrained("bigscience/T0_3B"); \ estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)' [] Estimated memory needed for params, optim states and gradients for a: HW: Setup with 1 node, 1 GPU per node. SW: Model with 2783M total params, 65M largest layer params. per CPU | per GPU | Options 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0 62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1 62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0 0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1 15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0 So you can fit it on a single 80GB GPU and no CPU offload, or a tiny 8GB GPU but then need ~60GB of CPU memory. (Remember this is just the memory for params, optimizer states and gradients - you will need a bit more memory for cuda kernels, activations and temps.) Then it's a tradeoff of cost vs speed. It'll be cheaper to buy/rent a smaller GPU (or less GPUs since you can use multiple GPUs with Deepspeed ZeRO. But then it'll be slower, so even if you don't care about how fast something will be done, the slowdown has a direct impact on the duration of using the GPU and thus bigger cost. So experiment and compare which works the best. If you have enough GPU memory make sure to disable the CPU/NVMe offload as it'll make everything faster. 
For example, let's repeat the same for 2 GPUs: ```bash $ python -c 'from transformers import AutoModel; \ from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \ model = AutoModel.from_pretrained("bigscience/T0_3B"); \ estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)' [] Estimated memory needed for params, optim states and gradients for a: HW: Setup with 1 node, 2 GPUs per node. SW: Model with 2783M total params, 65M largest layer params. per CPU | per GPU | Options 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1 70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0 62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1 62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0 0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1 31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0 So here you'd want 2x 32GB GPUs or higher without offloading to CPU. For full information please see [memory estimators](https://deepspeed.readthedocs.io/en/latest/memory.html). ### Filing Issues Here is how to file an issue so that we could quickly get to the bottom of the issue and help you to unblock your work. In your report please always include: 1. the full Deepspeed config file in the report 2. either the command line arguments if you were using the [`Trainer`] or [`TrainingArguments`] arguments if you were scripting the Trainer setup yourself. Please do not dump the [`TrainingArguments`] as it has dozens of entries that are irrelevant. 3. Output of: ```bash python -c 'import torch; print(f"torch: {torch.__version__}")' python -c 'import transformers; print(f"transformers: {transformers.__version__}")' python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")' 4. If possible include a link to a Google Colab notebook that we can reproduce the problem with. You can use this [notebook](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) as a starting point. 5. Unless it's impossible please always use a standard dataset that we can use and not something custom. 6. If possible try to use one of the existing [examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch) to reproduce the problem with. Things to consider: - Deepspeed is often not the cause of the problem. Some of the filed issues proved to be Deepspeed-unrelated. That is once Deepspeed was removed from the setup, the problem was still there. Therefore, if it's not absolutely obvious it's a DeepSpeed-related problem, as in you can see that there is an exception and you can see that DeepSpeed modules are involved, first re-test your setup without DeepSpeed in it. And only if the problem persists then do mentioned Deepspeed and supply all the required details. - If it's clear to you that the issue is in the DeepSpeed core and not the integration part, please file the Issue directly with [Deepspeed](https://github.com/microsoft/DeepSpeed/). If you aren't sure, please do not worry, either Issue tracker will do, we will figure it out once you posted it and redirect you to another Issue tracker if need be. 
### Troubleshooting

#### the `deepspeed` process gets killed at startup without a traceback

If the `deepspeed` process gets killed at launch time without a traceback, that usually means that the program tried to allocate more CPU memory than your system has or your process is allowed to allocate and the OS kernel killed that process. This is because your configuration file most likely has either `offload_optimizer` or `offload_param` or both configured to offload to `cpu`. If you have NVMe, experiment with offloading to NVMe if you're running under ZeRO-3. Here is how you can [estimate how much memory is needed for a specific model](https://deepspeed.readthedocs.io/en/latest/memory.html).

#### training and/or eval/predict loss is `NaN`

This often happens when one takes a model pre-trained in bf16 mixed precision mode and tries to use it under fp16 (with or without mixed precision). Most models trained on TPU and often the ones released by Google are in this category (e.g. almost all t5-based models). Here the solution is to either use fp32 or bf16 if your hardware supports it (TPU, Ampere GPUs or newer).

The other problem may have to do with using fp16. When you configure this section:

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```

and you see in your log that Deepspeed reports `OVERFLOW!` as follows:

```
0%|                                       | 0/189 [00:00<?, ?it/s]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
  1%|▌                                    | 1/189 [00:00<01:26,  2.17it/s]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
  1%|█▏
[...]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
 14%|████████████████▌                    | 27/189 [00:14<01:13,  2.21it/s]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
 15%|█████████████████▏                   | 28/189 [00:14<01:13,  2.18it/s]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
 15%|█████████████████▊                   | 29/189 [00:15<01:13,  2.18it/s]
 [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```

that means that the Deepspeed loss scaler can't figure out a scaling coefficient that overcomes loss overflow. (The log was massaged to be more readable here.)

In this case you usually need to raise the value of `initial_scale_power`. Setting it to `"initial_scale_power": 32` will typically resolve the problem.

### Notes

- DeepSpeed works with the PyTorch [`Trainer`] but not TF [`TFTrainer`].
- While DeepSpeed has a pip installable PyPI package, it is highly recommended that it gets installed from [source](https://github.com/microsoft/deepspeed#installation) to best match your hardware and also if you need to enable certain features, like 1-bit Adam, which aren't available in the pypi distribution.
- You don't have to use the [`Trainer`] to use DeepSpeed with 🤗 Transformers - you can use any model with your own trainer, and you will have to adapt the latter according to [the DeepSpeed integration instructions](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models).

## Non-Trainer Deepspeed Integration

The [`~integrations.HfDeepSpeedConfig`] is used to integrate Deepspeed into the 🤗 Transformers core functionality, when [`Trainer`] is not used. The only thing that it does is handling Deepspeed ZeRO-3 param gathering and automatically splitting the model onto multiple gpus during the `from_pretrained` call.
Everything else you have to do by yourself.

When using [`Trainer`] everything is automatically taken care of.

When not using [`Trainer`], to efficiently deploy DeepSpeed ZeRO-3, you must instantiate the [`~integrations.HfDeepSpeedConfig`] object before instantiating the model and keep that object alive.

If you're using Deepspeed ZeRO-1 or ZeRO-2 you don't need to use `HfDeepSpeedConfig` at all.

For example for a pretrained model:

```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive
model = AutoModel.from_pretrained("gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config)  # plus any other initialize args you need
```

or for a non-pretrained model:

```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config)  # plus any other initialize args you need
```

Please note that if you're not using the [`Trainer`] integration, you're completely on your own. Basically follow the documentation on the [Deepspeed](https://www.deepspeed.ai/) website. Also you have to configure the config file explicitly - you can't use `"auto"` values, you will have to put real values instead.

## HfDeepSpeedConfig

[[autodoc]] integrations.HfDeepSpeedConfig
    - all

### Custom DeepSpeed ZeRO Inference

Here is an example of how one could do DeepSpeed ZeRO Inference without using [`Trainer`] when one can't fit a model onto a single GPU. The solution includes using additional GPUs and/or offloading GPU memory to CPU memory.

The important nuance to understand here is that the way ZeRO is designed you can process different inputs on different GPUs in parallel.

The example has copious notes and is self-documenting.

Make sure to:

1. disable CPU offload if you have enough GPU memory (since it slows things down)
2. enable bf16 if you own an Ampere or a newer GPU to make things faster. If you don't have that hardware you may enable fp16 as long as you don't use any model that was pre-trained in bf16 mixed precision (such as most t5 models). These usually overflow in fp16 and you will see garbage as output.

```python
#!/usr/bin/env python

# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU.
# If you have enough GPU memory the program will run faster if you don't want offload to CPU -
# so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py

from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To avoid warnings about parallelism in tokenizers

# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()

model_name = "bigscience/T0_3B"

config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model

# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size

# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
#   faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
#   all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you
#   don't want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to
#   control which params should remain on gpus - the larger the value the smaller the offload size
#
# For in-depth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed

# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
    "fp16": {
        "enabled": False
    },
    "bf16": {
        "enabled": False
    },
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",
            "pin_memory": True
        },
        "overlap_comm": True,
        "contiguous_gradients": True,
        "reduce_bucket_size": model_hidden_size * model_hidden_size,
        "stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
        "stage3_param_persistence_threshold": 10 * model_hidden_size
    },
    "steps_per_print": 2000,
    "train_batch_size": train_batch_size,
    "train_micro_batch_size_per_gpu": 1,
    "wall_clock_breakdown": False
}
# fmt: on

# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive

# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()  # inference

# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
    text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
    text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
    outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n   in={text_in}\n  out={text_out}")
```

Let's save it as `t0.py` and run it:

```bash
$ deepspeed --num_gpus 2 t0.py
rank0:
   in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
  out=Positive
rank1:
   in=Is this review positive or negative? Review: this is the worst restaurant ever
  out=negative
```

This was a very basic example and you will want to adapt it to your needs.

### `generate` nuances

When using multiple GPUs with ZeRO Stage-3, one has to synchronize the GPUs by calling `generate(..., synced_gpus=True)`. If this is not done and one GPU finishes generating before the others, the whole system will hang as the remaining GPUs will not be able to receive the shard of weights from the GPU that stopped generating.

Starting from `transformers>=4.28`, if `synced_gpus` isn't explicitly specified, it'll be set to `True` automatically if these conditions are detected. But you can still override the value of `synced_gpus` if you need to.

## Testing Deepspeed Integration

If you submit a PR that involves DeepSpeed integration, please note that our CircleCI PR CI setup has no GPUs, so we only run tests requiring GPUs on a different CI nightly. Therefore if you get a green CI report in your PR it doesn't mean the DeepSpeed tests pass.

To run DeepSpeed tests, please run at least:

```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```

If you changed any of the modeling or pytorch examples code, then run the model zoo tests as well. The following will run all DeepSpeed tests:

```bash
RUN_SLOW=1 pytest tests/deepspeed
```

## Main DeepSpeed Resources

- [Project's github](https://github.com/microsoft/deepspeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)

Papers:

- [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)

Finally, please remember that HuggingFace [`Trainer`] only integrates DeepSpeed, therefore if you have any problems or questions with regards to DeepSpeed usage, please file an issue with the [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
main_classes/data_collator.md
# Data Collator Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of `train_dataset` or `eval_dataset`. To be able to build batches, data collators may apply some processing (like padding). Some of them (like [`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking) on the formed batch. Examples of use can be found in the [example scripts](../examples) or [example notebooks](../notebooks). ## Default data collator [[autodoc]] data.data_collator.default_data_collator ## DefaultDataCollator [[autodoc]] data.data_collator.DefaultDataCollator ## DataCollatorWithPadding [[autodoc]] data.data_collator.DataCollatorWithPadding ## DataCollatorForTokenClassification [[autodoc]] data.data_collator.DataCollatorForTokenClassification ## DataCollatorForSeq2Seq [[autodoc]] data.data_collator.DataCollatorForSeq2Seq ## DataCollatorForLanguageModeling [[autodoc]] data.data_collator.DataCollatorForLanguageModeling - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens ## DataCollatorForWholeWordMask [[autodoc]] data.data_collator.DataCollatorForWholeWordMask - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens ## DataCollatorForPermutationLanguageModeling [[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling - numpy_mask_tokens - tf_mask_tokens - torch_mask_tokens
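As a quick illustration of the dynamic padding behaviour described in the introduction above, here is a minimal sketch (assuming a BERT checkpoint, used purely as an example tokenizer) that batches two sentences of different lengths with [`DataCollatorWithPadding`]:

```python
# A minimal sketch of dynamic padding with DataCollatorWithPadding.
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

features = [
    tokenizer("a short sentence"),
    tokenizer("a noticeably longer example sentence for the same batch"),
]
batch = collator(features)  # both examples are padded to the length of the longest one
print(batch["input_ids"].shape)  # e.g. torch.Size([2, 12]), depending on tokenization
```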
main_classes/model.md
# Models

The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and [`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).

[`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which are common among all the models to:

- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model.

The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`] (for the PyTorch models) and [`~modeling_tf_utils.TFModelUtilsMixin`] (for the TensorFlow models) or, for text generation, [`~generation.GenerationMixin`] (for the PyTorch models), [`~generation.TFGenerationMixin`] (for the TensorFlow models) and [`~generation.FlaxGenerationMixin`] (for the Flax/JAX models).

## PreTrainedModel

[[autodoc]] PreTrainedModel
    - push_to_hub
    - all

### Large model loading

In Transformers 4.20.0, the [`~PreTrainedModel.from_pretrained`] method has been reworked to accommodate large models using [Accelerate](https://huggingface.co/docs/accelerate/big_modeling). This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.

This option can be activated with `low_cpu_mem_usage=True`. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.

```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```

Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With `device_map="auto"`, Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a `device_map`, `low_cpu_mem_usage` is automatically set to `True`, so you don't need to specify it:

```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```

You can inspect how the model was split across devices by looking at its `hf_device_map` attribute:

```python
t0pp.hf_device_map
```

```python out
{'shared': 0,
 'decoder.embed_tokens': 0,
 'encoder': 0,
 'decoder.block.0': 0,
 'decoder.block.1': 1,
 'decoder.block.2': 1,
 'decoder.block.3': 1,
 'decoder.block.4': 1,
 'decoder.block.5': 1,
 'decoder.block.6': 1,
 'decoder.block.7': 1,
 'decoder.block.8': 1,
 'decoder.block.9': 1,
 'decoder.block.10': 1,
 'decoder.block.11': 1,
 'decoder.block.12': 1,
 'decoder.block.13': 1,
 'decoder.block.14': 1,
 'decoder.block.15': 1,
 'decoder.block.16': 1,
 'decoder.block.17': 1,
 'decoder.block.18': 1,
 'decoder.block.19': 1,
 'decoder.block.20': 1,
 'decoder.block.21': 1,
 'decoder.block.22': 'cpu',
 'decoder.block.23': 'cpu',
 'decoder.final_layer_norm': 'cpu',
 'decoder.dropout': 'cpu',
 'lm_head': 'cpu'}
```

You can also write your own device map following the same format (a dictionary mapping layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):

```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```

Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like `torch.float16`) or use direct quantization techniques as described below.

### Model Instantiation dtype

Under Pytorch a model normally gets instantiated with `torch.float32` format. This can be an issue if one tries to load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can either explicitly pass the desired `dtype` using the `torch_dtype` argument:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```

or, if you want the model to always load in the most optimal memory pattern, you can use the special value `"auto"`, and then `dtype` will be automatically derived from the model's weights:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```

Models instantiated from scratch can also be told which `dtype` to use with:

```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```

Due to Pytorch design, this functionality is only available for floating dtypes.

## ModuleUtilsMixin

[[autodoc]] modeling_utils.ModuleUtilsMixin

## TFPreTrainedModel

[[autodoc]] TFPreTrainedModel
    - push_to_hub
    - all

## TFModelUtilsMixin

[[autodoc]] modeling_tf_utils.TFModelUtilsMixin

## FlaxPreTrainedModel

[[autodoc]] FlaxPreTrainedModel
    - push_to_hub
    - all

## Pushing to the Hub

[[autodoc]] utils.PushToHubMixin

## Sharded checkpoints

[[autodoc]] modeling_utils.load_sharded_checkpoint
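To tie the two subsections above together, here is a short sketch (assumptions: a CUDA machine with `accelerate` installed; the checkpoint name is just an example) that loads a model directly in fp16 while letting Accelerate pick the device placement, then inspects the result:

```python
# A sketch combining half-precision loading with automatic device placement.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/T0_3B",         # example checkpoint; any large seq2seq model works
    torch_dtype=torch.float16,  # load the weights directly in fp16
    device_map="auto",          # let Accelerate spread layers across GPU(s)/CPU
)
print(model.dtype)          # torch.float16
print(model.hf_device_map)  # layer name -> device mapping chosen by Accelerate
```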
main_classes/processors.md
# Processors Processors can mean two different things in the Transformers library: - the objects that pre-process inputs for multi-modal models such as [Wav2Vec2](../model_doc/wav2vec2) (speech and text) or [CLIP](../model_doc/clip) (text and vision) - deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD. ## Multi-modal processors Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text, vision and audio). This is handled by objects called processors, which group together two or more processing objects such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio). Those processors inherit from the following base class that implements the saving and loading functionality: [[autodoc]] ProcessorMixin ## Deprecated processors All processors follow the same architecture which is that of the [`~data.processors.utils.DataProcessor`]. The processor returns a list of [`~data.processors.utils.InputExample`]. These [`~data.processors.utils.InputExample`] can be converted to [`~data.processors.utils.InputFeatures`] in order to be fed to the model. [[autodoc]] data.processors.utils.DataProcessor [[autodoc]] data.processors.utils.InputExample [[autodoc]] data.processors.utils.InputFeatures ## GLUE [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI. Those processors are: - [`~data.processors.utils.MrpcProcessor`] - [`~data.processors.utils.MnliProcessor`] - [`~data.processors.utils.MnliMismatchedProcessor`] - [`~data.processors.utils.Sst2Processor`] - [`~data.processors.utils.StsbProcessor`] - [`~data.processors.utils.QqpProcessor`] - [`~data.processors.utils.QnliProcessor`] - [`~data.processors.utils.RteProcessor`] - [`~data.processors.utils.WnliProcessor`] Additionally, the following method can be used to load values from a data file and convert them to a list of [`~data.processors.utils.InputExample`]. [[autodoc]] data.processors.glue.glue_convert_examples_to_features ## XNLI [The Cross-Lingual NLI Corpus (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) is a benchmark that evaluates the quality of cross-lingual text representations. XNLI is crowd-sourced dataset based on [*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource language such as English and low-resource languages such as Swahili). It was released together with the paper [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) This library hosts the processor to load the XNLI data: - [`~data.processors.utils.XnliProcessor`] Please note that since the gold labels are available on the test set, evaluation is performed on the test set. An example using these processors is given in the [run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) script. 
## SQuAD

[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250). The second version (v2.0) was released alongside the paper [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822).

This library hosts a processor for each of the two versions:

### Processors

Those processors are:

- [`~data.processors.utils.SquadV1Processor`]
- [`~data.processors.utils.SquadV2Processor`]

They both inherit from the abstract class [`~data.processors.utils.SquadProcessor`]

[[autodoc]] data.processors.squad.SquadProcessor
    - all

Additionally, the following method can be used to convert SQuAD examples into [`~data.processors.utils.SquadFeatures`] that can be used as model inputs.

[[autodoc]] data.processors.squad.squad_convert_examples_to_features

These processors as well as the aforementioned method can be used with files containing the data as well as with the *tensorflow_datasets* package. Examples are given below.

### Example usage

Here is an example using the processors as well as the conversion method using data files:

```python
# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)

# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=args.doc_stride,
    max_query_length=max_query_length,
    is_training=not evaluate,
)
```

Using *tensorflow_datasets* is as easy as using a data file:

```python
# tensorflow_datasets only handle Squad V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=args.doc_stride,
    max_query_length=max_query_length,
    is_training=not evaluate,
)
```

Another example using these processors is given in the [run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) script.
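The conversion method also accepts a `return_dataset` argument if you want it to hand back a ready-to-use dataset object. The following is only a sketch (it reuses `examples`, `tokenizer` and the length/stride variables from the snippets above, and assumes an evaluation setting):

```python
# A sketch: get a torch TensorDataset directly from the conversion call.
# Assumes `examples`, `tokenizer`, `max_seq_length`, `doc_stride` and
# `max_query_length` are defined as in the snippets above.
features, dataset = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=doc_stride,
    max_query_length=max_query_length,
    is_training=False,
    return_dataset="pt",  # "pt" -> torch.utils.data.TensorDataset, "tf" -> tf.data.Dataset
)
```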
main_classes/tokenizer.md
# Tokenizer A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allows: 1. a significant speed-up in particular when doing batched tokenization and 2. additional methods to map between the original string (character and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token). The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and "Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3 repository). They both rely on [`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods, and [`~tokenization_utils_base.SpecialTokensMixin`]. [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main methods for using all the tokenizers: - Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers). - Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece). - Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization. [`BatchEncoding`] holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by these methods (`input_ids`, `attention_mask`). When the tokenizer is a "Fast" tokenizer (i.e., backed by HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition several advanced alignment methods which can be used to map between the original string (character and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token). ## PreTrainedTokenizer [[autodoc]] PreTrainedTokenizer - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## PreTrainedTokenizerFast The [`PreTrainedTokenizerFast`] depend on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) page to understand how this is done. [[autodoc]] PreTrainedTokenizerFast - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## BatchEncoding [[autodoc]] BatchEncoding
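As a small illustration of the alignment methods described above, here is a sketch (using a BERT checkpoint purely as an example of a "Fast" tokenizer) of the extra helpers available on the returned [`BatchEncoding`]:

```python
# A sketch of the alignment helpers provided by "Fast" tokenizers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # loads the fast tokenizer by default
encoding = tokenizer("Tokenizers are fun")

print(encoding.tokens())           # sub-word token strings, including special tokens
print(encoding.word_ids())         # word index for each token (None for special tokens)
print(encoding.char_to_token(3))   # index of the token covering character 3 of the input
print(encoding.token_to_chars(1))  # character span in the original string for token 1
```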
main_classes/trainer.md
# Trainer The [`Trainer`] class provides an API for feature-complete training in PyTorch for most standard use cases. It's used in most of the [example scripts](https://github.com/huggingface/transformers/tree/main/examples). If you're looking to fine-tune a language model like Llama-2 or Mistral on a text dataset using autoregressive techniques, consider using [`trl`](https://github.com/huggingface/trl)'s [`~trl.SFTTrainer`]. The [`~trl.SFTTrainer`] wraps the [`Trainer`] and is specially optimized for this particular task and supports sequence packing, LoRA, quantization, and DeepSpeed for efficient scaling to any model size. On the other hand, the [`Trainer`] is a more versatile option, suitable for a broader spectrum of tasks. Before instantiating your [`Trainer`], create a [`TrainingArguments`] to access all the points of customization during training. The API supports distributed training on multiple GPUs/TPUs, mixed precision through [NVIDIA Apex](https://github.com/NVIDIA/apex) and Native AMP for PyTorch. The [`Trainer`] contains the basic training loop which supports the above features. To inject custom behavior you can subclass them and override the following methods: - **get_train_dataloader** -- Creates the training DataLoader. - **get_eval_dataloader** -- Creates the evaluation DataLoader. - **get_test_dataloader** -- Creates the test DataLoader. - **log** -- Logs information on the various objects watching training. - **create_optimizer_and_scheduler** -- Sets up the optimizer and learning rate scheduler if they were not passed at init. Note, that you can also subclass or override the `create_optimizer` and `create_scheduler` methods separately. - **create_optimizer** -- Sets up the optimizer if it wasn't passed at init. - **create_scheduler** -- Sets up the learning rate scheduler if it wasn't passed at init. - **compute_loss** - Computes the loss on a batch of training inputs. - **training_step** -- Performs a training step. - **prediction_step** -- Performs an evaluation/test step. - **evaluate** -- Runs an evaluation loop and returns metrics. - **predict** -- Returns predictions (with metrics if labels are available) on a test set. The [`Trainer`] class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it on other models. When using it on your own model, make sure: - your model always return tuples or subclasses of [`~utils.ModelOutput`]. - your model can compute the loss if a `labels` argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples) - your model can accept multiple label arguments (use the `label_names` in your [`TrainingArguments`] to indicate their name to the [`Trainer`]) but none of them should be named `"label"`. 
Here is an example of how to customize [`Trainer`] to use a weighted loss (useful when you have an unbalanced training set):

```python
import torch
from torch import nn
from transformers import Trainer


class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (suppose one has 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```

Another way to customize the training loop behavior for the PyTorch [`Trainer`] is to use [callbacks](callback) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early stopping).

## Trainer

[[autodoc]] Trainer
    - all

## Seq2SeqTrainer

[[autodoc]] Seq2SeqTrainer
    - evaluate
    - predict

## TrainingArguments

[[autodoc]] TrainingArguments
    - all

## Seq2SeqTrainingArguments

[[autodoc]] Seq2SeqTrainingArguments
    - all

## Checkpoints

By default, [`Trainer`] will save all checkpoints in the `output_dir` you set in the [`TrainingArguments`] you are using. Those will go in a subfolder named `checkpoint-xxx` with xxx being the step at which the training was at.

Resuming training from a checkpoint can be done when calling [`Trainer.train`] with either:

- `resume_from_checkpoint=True` which will resume training from the latest checkpoint
- `resume_from_checkpoint=checkpoint_dir` which will resume training from the specific checkpoint in the directory passed.

In addition, you can easily save your checkpoints on the Model Hub when using `push_to_hub=True`. By default, all the models saved in intermediate checkpoints are saved in different commits, but not the optimizer state. You can adapt the `hub_strategy` value of your [`TrainingArguments`] to either:

- `"checkpoint"`: the latest checkpoint is also pushed in a subfolder named `last-checkpoint`, allowing you to resume training easily with `trainer.train(resume_from_checkpoint="output_dir/last-checkpoint")`.
- `"all_checkpoints"`: all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)

## Logging

By default [`Trainer`] will use `logging.INFO` for the main process and `logging.WARNING` for the replicas if any.

These defaults can be overridden to use any of the 5 `logging` levels with [`TrainingArguments`]'s arguments:

- `log_level` - for the main process
- `log_level_replica` - for the replicas

Further, if [`TrainingArguments`]'s `log_on_each_node` is set to `False` only the main node will use the log level settings for its main process, all other nodes will use the log level settings for replicas.

Note that [`Trainer`] is going to set `transformers`'s log level separately for each node in its [`Trainer.__init__`]. So you may want to set this sooner (see the next example) if you tap into other `transformers` functionality before creating the [`Trainer`] object.
Here is an example of how this can be used in an application: thon [] logger = logging.getLogger(__name__) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) # set the main code and the modules it uses to the same log-level according to the node log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) trainer = Trainer() And then if you only want to see warnings on the main node and all other nodes to not print any most likely duplicated warnings you could run it as: ```bash my_app.py --log_level warning --log_level_replica error In the multi-node environment if you also don't want the logs to repeat for each node's main process, you will want to change the above to: ```bash my_app.py --log_level warning --log_level_replica error --log_on_each_node 0 and then only the main process of the first node will log at the "warning" level, and all other processes on the main node and all processes on other nodes will log at the "error" level. If you need your application to be as quiet as possible you could do: ```bash my_app.py --log_level error --log_level_replica error --log_on_each_node 0 (add `--log_on_each_node 0` if on multi-node environment) ## Randomness When resuming from a checkpoint generated by [`Trainer`] all efforts are made to restore the _python_, _numpy_ and _pytorch_ RNG states to the same states as they were at the moment of saving that checkpoint, which should make the "stop and resume" style of training as close as possible to non-stop training. However, due to various default non-deterministic pytorch settings this might not fully work. If you want full determinism please refer to [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness). As explained in the document, that some of those settings that make things deterministic (.e.g., `torch.backends.cudnn.deterministic`) may slow things down, therefore this can't be done by default, but you can enable those yourself if needed. ## Specific GPUs Selection Let's discuss how you can tell your program which GPUs are to be used and in what order. When using [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) to use only a subset of your GPUs, you simply specify the number of GPUs to use. For example, if you have 4 GPUs, but you wish to use the first 2 you can do: ```bash python -m torch.distributed.launch --nproc_per_node=2 trainer-program.py if you have either [`accelerate`](https://github.com/huggingface/accelerate) or [`deepspeed`](https://github.com/microsoft/DeepSpeed) installed you can also accomplish the same by using one of: ```bash accelerate launch --num_processes 2 trainer-program.py ```bash deepspeed --num_gpus 2 trainer-program.py You don't need to use the Accelerate or [the Deepspeed integration](deepspeed) features to use these launchers. Until now you were able to tell the program how many GPUs to use. Now let's discuss how to select specific GPUs and control their order. The following environment variables help you control which GPUs to use and their order. **`CUDA_VISIBLE_DEVICES`** If you have multiple GPUs and you'd like to use only 1 or a few of those GPUs, set the environment variable `CUDA_VISIBLE_DEVICES` to a list of the GPUs to be used. 
For example, let's say you have 4 GPUs: 0, 1, 2 and 3. To run only on the physical GPUs 0 and 2, you can do: ```bash CUDA_VISIBLE_DEVICES=0,2 python -m torch.distributed.launch trainer-program.py So now pytorch will see only 2 GPUs, where your physical GPUs 0 and 2 are mapped to `cuda:0` and `cuda:1` correspondingly. You can even change their order: ```bash CUDA_VISIBLE_DEVICES=2,0 python -m torch.distributed.launch trainer-program.py Here your physical GPUs 0 and 2 are mapped to `cuda:1` and `cuda:0` correspondingly. The above examples were all for `DistributedDataParallel` use pattern, but the same method works for [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) as well: ```bash CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py To emulate an environment without GPUs simply set this environment variable to an empty value like so: ```bash CUDA_VISIBLE_DEVICES= python trainer-program.py As with any environment variable you can, of course, export those instead of adding these to the command line, as in: ```bash export CUDA_VISIBLE_DEVICES=0,2 python -m torch.distributed.launch trainer-program.py but this approach can be confusing since you may forget you set up the environment variable earlier and not understand why the wrong GPUs are used. Therefore, it's a common practice to set the environment variable just for a specific run on the same command line as it's shown in most examples of this section. **`CUDA_DEVICE_ORDER`** There is an additional environment variable `CUDA_DEVICE_ORDER` that controls how the physical devices are ordered. The two choices are: 1. ordered by PCIe bus IDs (matches `nvidia-smi`'s order) - this is the default. ```bash export CUDA_DEVICE_ORDER=PCI_BUS_ID 2. ordered by GPU compute capabilities ```bash export CUDA_DEVICE_ORDER=FASTEST_FIRST Most of the time you don't need to care about this environment variable, but it's very helpful if you have a lopsided setup where you have an old and a new GPUs physically inserted in such a way so that the slow older card appears to be first. One way to fix that is to swap the cards. But if you can't swap the cards (e.g., if the cooling of the devices gets impacted) then setting `CUDA_DEVICE_ORDER=FASTEST_FIRST` will always put the newer faster card first. It'll be somewhat confusing though since `nvidia-smi` will still report them in the PCIe order. The other solution to swapping the order is to use: ```bash export CUDA_VISIBLE_DEVICES=1,0 In this example we are working with just 2 GPUs, but of course the same would apply to as many GPUs as your computer has. Also if you do set this environment variable it's the best to set it in your `~/.bashrc` file or some other startup config file and forget about it. ## Trainer Integrations The [`Trainer`] has been extended to support libraries that may dramatically improve your training time and fit much bigger models. Currently it supports third party solutions, [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), which implement parts of the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054). This provided support is new and experimental as of this writing. 
While the support for DeepSpeed and PyTorch FSDP is active and we welcome issues around it, we don't support the FairScale integration anymore since it has been integrated in PyTorch main (see the [PyTorch FSDP integration](#pytorch-fully-sharded-data-parallel)) ### CUDA Extension Installation Notes As of this writing, Deepspeed require compilation of CUDA C++ code, before it can be used. While all installation issues should be dealt with through the corresponding GitHub Issues of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), there are a few common issues that one may encounter while building any PyTorch extension that needs to build CUDA extensions. Therefore, if you encounter a CUDA-related build issue while doing the following: ```bash pip install deepspeed please, read the following notes first. In these notes we give examples for what to do when `pytorch` has been built with CUDA `10.2`. If your situation is different remember to adjust the version number to the one you are after. #### Possible problem #1 While, Pytorch comes with its own CUDA toolkit, to build these two projects you must have an identical version of CUDA installed system-wide. For example, if you installed `pytorch` with `cudatoolkit==10.2` in the Python environment, you also need to have CUDA `10.2` installed system-wide. The exact location may vary from system to system, but `/usr/local/cuda-10.2` is the most common location on many Unix systems. When CUDA is correctly set up and added to the `PATH` environment variable, one can find the installation location by doing: ```bash which nvcc If you don't have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you're on Ubuntu you may want to search for: [ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install). #### Possible problem #2 Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you may have: ```bash /usr/local/cuda-10.2 /usr/local/cuda-11.0 Now, in this situation you need to make sure that your `PATH` and `LD_LIBRARY_PATH` environment variables contain the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the last version was installed. If you encounter the problem, where the package build fails because it can't find the right CUDA version despite you having it installed system-wide, it means that you need to adjust the 2 aforementioned environment variables. First, you may look at their contents: ```bash echo $PATH echo $LD_LIBRARY_PATH so you get an idea of what is inside. It's possible that `LD_LIBRARY_PATH` is empty. `PATH` lists the locations of where executables can be found and `LD_LIBRARY_PATH` is for where shared libraries are to looked for. In both cases, earlier entries have priority over the later ones. `:` is used to separate multiple entries. Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths to be listed first by doing: ```bash export PATH=/usr/local/cuda-10.2/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH Note that we aren't overwriting the existing values, but prepending instead. Of course, adjust the version number, the full path if need be. Check that the directories you assign actually do exist. 
The `lib64` sub-directory is where the various CUDA `.so` objects, like `libcudart.so`, reside. It's unlikely that your system will have it named differently, but if it does, adjust it to reflect your reality.

#### Possible problem #3

Some older CUDA versions may refuse to build with newer compilers. For example, you may have `gcc-9` but CUDA wants `gcc-7`.

There are various ways to go about it.

If you can install the latest CUDA toolkit it typically should support the newer compiler.

Alternatively, you could install the lower version of the compiler in addition to the one you already have, or you may already have it but it's not the default one, so the build system can't see it. If you have `gcc-7` installed but the build system complains it can't find it, the following might do the trick:

```bash
sudo ln -s /usr/bin/gcc-7  /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7  /usr/local/cuda-10.2/bin/g++
```

Here, we are making a symlink to `gcc-7` from `/usr/local/cuda-10.2/bin/gcc` and since `/usr/local/cuda-10.2/bin/` should be in the `PATH` environment variable (see the previous problem's solution), it should find `gcc-7` (and `g++-7`) and then the build will succeed.

As always make sure to edit the paths in the example to match your situation.

### PyTorch Fully Sharded Data parallel

To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model. This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters. To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/). We have integrated the latest PyTorch's Fully Sharded Data Parallel (FSDP) training feature. All you need to do is enable it through the config.

**Required PyTorch version for FSDP support**: PyTorch >=2.1.0

**Usage**:

- Make sure you have added the distributed launcher `-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE` if you haven't been using it already.
- **Sharding Strategy**:
  - FULL_SHARD : Shards optimizer states + gradients + model parameters across data parallel workers/GPUs. For this, add `--fsdp full_shard` to the command line arguments.
  - SHARD_GRAD_OP : Shards optimizer states + gradients across data parallel workers/GPUs. For this, add `--fsdp shard_grad_op` to the command line arguments.
  - NO_SHARD : No sharding. For this, add `--fsdp no_shard` to the command line arguments.
  - HYBRID_SHARD : Applies FULL_SHARD within a node, and replicates parameters across nodes. For this, add `--fsdp hybrid_shard` to the command line arguments.
  - HYBRID_SHARD_ZERO2 : Applies SHARD_GRAD_OP within a node, and replicates parameters across nodes. For this, add `--fsdp hybrid_shard_zero2` to the command line arguments.
- To offload the parameters and gradients to the CPU, add `--fsdp "full_shard offload"` or `--fsdp "shard_grad_op offload"` to the command line arguments.
- To automatically recursively wrap layers with FSDP using `default_auto_wrap_policy`, add `--fsdp "full_shard auto_wrap"` or `--fsdp "shard_grad_op auto_wrap"` to the command line arguments.
- To enable both CPU offloading and auto wrapping, add `--fsdp "full_shard offload auto_wrap"` or `--fsdp "shard_grad_op offload auto_wrap"` to the command line arguments.
- The remaining FSDP config is passed via `--fsdp_config <path_to_fsdp_config.json>`. It is either the location of an FSDP json config file (e.g., `fsdp_config.json`) or an already loaded json file as a `dict`.
- If auto wrapping is enabled, you can either use a transformer based auto wrap policy or a size based auto wrap policy.
  - For a transformer based auto wrap policy, it is recommended to specify `transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`]. This is important because submodules that share weights (e.g., embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
  - For a size based auto wrap policy, please add `min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping.
- `backward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. `backward_pre` and `backward_post` are available options. For more information refer to `torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch`
- `forward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. If `"True"`, FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
- `limit_all_gathers` can be specified in the config file. If `"True"`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
- `activation_checkpointing` can be specified in the config file. If `"True"`, FSDP activation checkpointing is used: a technique to reduce memory usage by clearing the activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage.
- `use_orig_params` can be specified in the config file. If `True`, it allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This is useful in cases such as parameter-efficient fine-tuning. It also enables having different optimizer param groups. This should be `True` when creating the optimizer object before preparing/wrapping the model with FSDP. Please refer to this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019).

**Saving and loading**

Saving entire intermediate checkpoints using `FULL_STATE_DICT` state_dict_type with CPU offloading on rank 0 takes a lot of time and often results in NCCL Timeout errors due to indefinite hanging during broadcasting. However, at the end of training, we want the whole model state dict instead of the sharded state dict which is only compatible with FSDP. Use `SHARDED_STATE_DICT` (default) state_dict_type to save the intermediate checkpoints and optimizer states in this format recommended by the PyTorch team.

Saving the final checkpoint in transformers format using the default `safetensors` format requires the changes below.

```python
if trainer.is_fsdp_enabled:
    trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")

trainer.save_model(script_args.output_dir)
```

**A few caveats to be aware of**

- It is incompatible with `generate`, and thus incompatible with `--predict_with_generate` in all seq2seq/clm scripts (translation/summarization/clm etc.). Please refer to issue [#21667](https://github.com/huggingface/transformers/issues/21667)

### PyTorch/XLA Fully Sharded Data parallel

For all the TPU users, great news!
PyTorch/XLA now supports FSDP. All the latest Fully Sharded Data Parallel (FSDP) training are supported. For more information refer to the [Scaling PyTorch models on Cloud TPUs with FSDP](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) and [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp) All you need to do is enable it through the config. **Required PyTorch/XLA version for FSDP support**: >=2.0 **Usage**: Pass `--fsdp "full shard"` along with following changes to be made in `--fsdp_config `: - `xla` should be set to `True` to enable PyTorch/XLA FSDP. - `xla_fsdp_settings` The value is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see [here]( https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py). - `xla_fsdp_grad_ckpt`. When `True`, uses gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through `min_num_params` or `transformer_layer_cls_to_wrap`. - You can either use transformer based auto wrap policy or size based auto wrap policy. - For transformer based auto wrap policy, it is recommended to specify `transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class name (case-sensitive) to wrap ,e.g, [`BertLayer`], [`GPTJBlock`], [`T5Block`] . This is important because submodules that share weights (e.g., embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by couple of MLP layers. Remaining layers including the shared embeddings are conveniently wrapped in same outermost FSDP unit. Therefore, use this for transformer based models. - For size based auto wrap policy, please add `min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping. ### Using Trainer for accelerated PyTorch Training on Mac With PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device. This will map computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. For more information please refer official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/) and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html). We strongly recommend to install PyTorch >= 1.13 (nightly version at the time of writing) on your MacOS machine. It has major fixes related to model correctness and performance improvements for transformer based models. Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details. **Benefits of Training and Inference using Apple Silicon Chips** 1. Enables users to train larger networks or batch sizes locally 2. Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to unified memory architecture. Therefore, improving end-to-end performance. 3. 
Reduces costs associated with cloud-based development or the need for additional local GPUs. **Pre-requisites**: To install torch with mps support, please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1). **Usage**: `mps` device will be used by default if available similar to the way `cuda` device is used. Therefore, no action from user is required. For example, you can run the official Glue text classififcation task (from the root folder) using Apple Silicon GPU with below command: ```bash export TASK_NAME=mrpc python examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir **A few caveats to be aware of** 1. Some PyTorch operations have not been implemented in mps and will throw an error. One way to get around that is to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, which will fallback to CPU for these operations. It still throws a UserWarning however. 2. Distributed setups `gloo` and `nccl` are not working with `mps` device. This means that currently only single GPU of `mps` device type can be used. Finally, please, remember that, 🤗 `Trainer` only integrates MPS backend, therefore if you have any problems or questions with regards to MPS backend usage, please, file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues). ## Using Accelerate Launcher with Trainer Accelerate now powers Trainer. In terms of what users should expect: - They can keep using the Trainer ingterations such as FSDP, DeepSpeed vis trainer arguments without any changes on their part. - They can now use Accelerate Launcher with Trainer (recommended). Steps to use Accelerate Launcher with Trainer: 1. Make sure 🤗 Accelerate is installed, you can't use the `Trainer` without it anyway. If not `pip install accelerate`. You may also need to update your version of Accelerate: `pip install accelerate --upgrade` 2. Run `accelerate config` and fill the questionnaire. Below are example accelerate configs: a. DDP Multi-node Multi-GPU config: ```yaml compute_environment: LOCAL_MACHINE distributed_type: MULTI_GPU downcast_bf16: 'no' gpu_ids: all machine_rank: 0 #change rank as per the node main_process_ip: 192.168.20.1 main_process_port: 9898 main_training_function: main mixed_precision: fp16 num_machines: 2 num_processes: 8 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false b. FSDP config: ```yaml compute_environment: LOCAL_MACHINE distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: BertLayer fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false c. 
DeepSpeed config pointing to a file: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_config_file: /home/user/configs/ds_zero3_config.json zero3_init_flag: true distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false d. DeepSpeed config using accelerate plugin: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 0.7 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: true zero_stage: 2 distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false 3. Run the Trainer script with args other than the ones handled above by accelerate config or launcher args. Below is an example to run `run_glue.py` using `accelerate launcher` with FSDP config from above. ```bash cd transformers accelerate launch \ ./examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir 4. You can also directly use the cmd args for `accelerate launch`. Above example would map to: ```bash cd transformers accelerate launch --num_processes=2 \ --use_fsdp \ --mixed_precision=bf16 \ --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \ --fsdp_transformer_layer_cls_to_wrap="BertLayer" \ --fsdp_sharding_strategy=1 \ --fsdp_state_dict_type=FULL_STATE_DICT \ ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir For more information, please refer the 🤗 Accelerate CLI guide: [Launching your 🤗 Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch). Sections that were moved: [ DeepSpeed | Installation | Deployment with multiple GPUs | Deployment with one GPU | Deployment in Notebooks | Configuration | Passing Configuration | Shared Configuration | ZeRO | ZeRO-2 Config | ZeRO-3 Config | NVMe Support | ZeRO-2 vs ZeRO-3 Performance | ZeRO-2 Example | ZeRO-3 Example | Optimizer | Scheduler | fp32 Precision | Automatic Mixed Precision | Batch Size | Gradient Accumulation | Gradient Clipping | Getting The Model Weights Out ] ## Boost your fine-tuning performances using NEFTune NEFTune is a technique to boost the performance of chat models and was introduced by the paper “NEFTune: Noisy Embeddings Improve Instruction Finetuning” from Jain et al. it consists of adding noise to the embedding vectors during training. According to the abstract of the paper: > Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. 
Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune.

To use it in `Trainer`, simply pass `neftune_noise_alpha` when creating your `TrainingArguments` instance. Note that to avoid any surprising behaviour, NEFTune is disabled after training so that the embedding layer recovers its original behaviour.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(..., neftune_noise_alpha=0.1)
trainer = Trainer(..., args=args)
trainer.train()
```
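For reference, here is a slightly more complete sketch of the same idea. The checkpoint, hyperparameters, and the `train_dataset` variable are illustrative assumptions, not part of the original example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_id = "facebook/opt-350m"  # assumed checkpoint for illustration
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

args = TrainingArguments(
    output_dir="opt-350m-neftune",   # hypothetical output directory
    per_device_train_batch_size=4,
    num_train_epochs=1,
    neftune_noise_alpha=0.1,         # enables NEFTune noise on the input embeddings
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,     # assumed to be an already tokenized dataset
)
trainer.train()
```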
main_classes/onnx.md
# Exporting 🤗 Transformers models to ONNX 🤗 Transformers provides a `transformers.onnx` package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. See the [guide](../serialization) on exporting 🤗 Transformers models for more details. ## ONNX Configurations We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export: * Encoder-based models inherit from [`~onnx.config.OnnxConfig`] * Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`] * Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`] ### OnnxConfig [[autodoc]] onnx.config.OnnxConfig ### OnnxConfigWithPast [[autodoc]] onnx.config.OnnxConfigWithPast ### OnnxSeq2SeqConfigWithPast [[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast ## ONNX Features Each ONNX configuration is associated with a set of _features_ that enable you to export models for different types of topologies or tasks. ### FeaturesManager [[autodoc]] onnx.features.FeaturesManager
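To make the inheritance pattern concrete, here is a minimal sketch of a custom ONNX configuration for an encoder-style model. The class name and the dynamic-axis names are illustrative assumptions; the `inputs` property simply declares the inputs the exported graph expects:

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class CustomEncoderOnnxConfig(OnnxConfig):  # hypothetical name
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Declare the model inputs and which of their axes are dynamic.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```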
main_classes/optimizer_schedules.md
# Optimization

The `.optimization` module provides:

- an optimizer with weight decay fixed that can be used to fine-tune models
- several schedules in the form of schedule objects that inherit from `_LRSchedule`
- a gradient accumulation class to accumulate the gradients of multiple batches

## AdamW (PyTorch)

[[autodoc]] AdamW

## AdaFactor (PyTorch)

[[autodoc]] Adafactor

## AdamWeightDecay (TensorFlow)

[[autodoc]] AdamWeightDecay

[[autodoc]] create_optimizer

## Schedules

### Learning Rate Schedules (PyTorch)

[[autodoc]] SchedulerType

[[autodoc]] get_scheduler

[[autodoc]] get_constant_schedule

[[autodoc]] get_constant_schedule_with_warmup

[[autodoc]] get_cosine_schedule_with_warmup

[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup

[[autodoc]] get_linear_schedule_with_warmup

[[autodoc]] get_polynomial_decay_schedule_with_warmup

[[autodoc]] get_inverse_sqrt_schedule

### Warmup (TensorFlow)

[[autodoc]] WarmUp

## Gradient Strategies

### GradientAccumulator (TensorFlow)

[[autodoc]] GradientAccumulator
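As an illustration of how an optimizer and a schedule are typically combined in a manual training loop (a sketch; the checkpoint, the `train_dataloader` variable, and the step counts are assumptions):

```python
import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")  # assumed checkpoint

# torch.optim.AdamW implements the same decoupled weight decay as the AdamW documented above.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

num_training_steps = 1000  # assumed total number of optimization steps
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)

for batch in train_dataloader:  # train_dataloader assumed to yield model inputs with labels
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```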
main_classes/feature_extractor.md
# Feature Extractor A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to generate Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors. ## FeatureExtractionMixin [[autodoc]] feature_extraction_utils.FeatureExtractionMixin - from_pretrained - save_pretrained ## SequenceFeatureExtractor [[autodoc]] SequenceFeatureExtractor - pad ## BatchFeature [[autodoc]] BatchFeature ## ImageFeatureExtractionMixin [[autodoc]] image_utils.ImageFeatureExtractionMixin
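For example, a minimal sketch of preparing a batch of audio for a speech model with a pretrained feature extractor (the checkpoint and the raw waveforms are illustrative assumptions):

```python
import numpy as np
from transformers import AutoFeatureExtractor

# Assumed checkpoint for illustration; any audio model with a feature extractor works similarly.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

# Two fake mono waveforms of different lengths, sampled at 16 kHz.
raw_audio = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]

# Padding aligns the sequences; the result is a BatchFeature holding framework tensors.
inputs = feature_extractor(raw_audio, sampling_rate=16000, padding=True, return_tensors="pt")
print(inputs["input_values"].shape)  # (batch, padded_length)
```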
main_classes/text_generation.md
# Generation Each framework has a generate method for text generation implemented in their respective `GenerationMixin` class: - PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`]. - TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`]. - Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`]. Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method. To learn how to inspect a model's generation configuration, what are the defaults, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the [text generation strategies guide](../generation_strategies). The guide also explains how to use related features, like token streaming. ## GenerationConfig [[autodoc]] generation.GenerationConfig - from_pretrained - from_model_config - save_pretrained ## GenerationMixin [[autodoc]] generation.GenerationMixin - generate - compute_transition_scores - greedy_search - sample - beam_search - beam_sample - contrastive_search - group_beam_search - constrained_beam_search ## TFGenerationMixin [[autodoc]] generation.TFGenerationMixin - generate - compute_transition_scores ## FlaxGenerationMixin [[autodoc]] generation.FlaxGenerationMixin - generate
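As a short illustration of parameterizing `generate` with a [`~generation.GenerationConfig`] (a sketch; the checkpoint and sampling values are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "gpt2"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Group the generation parameters in a reusable, saveable configuration object.
generation_config = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```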
main_classes/configuration.md
# Configuration The base class [`PretrainedConfig`] implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository). Each derived config class implements model specific attributes. Common attributes present in all config classes are: `hidden_size`, `num_attention_heads`, and `num_hidden_layers`. Text models further implement: `vocab_size`. ## PretrainedConfig [[autodoc]] PretrainedConfig - push_to_hub - all
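As a quick illustration of the common workflow around configurations (a sketch; the checkpoint, the attribute tweak, and the local directory are illustrative):

```python
from transformers import AutoConfig, AutoModel

# Load the configuration of a pretrained checkpoint (assumed checkpoint for illustration).
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size, config.num_attention_heads, config.num_hidden_layers)

# Tweak an attribute and build a randomly initialized model with that architecture.
config.num_hidden_layers = 6
model = AutoModel.from_config(config)

# Configurations can be saved and reloaded like any other artifact.
config.save_pretrained("./my-bert-config")  # hypothetical local directory
reloaded = AutoConfig.from_pretrained("./my-bert-config")
```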
main_classes/callback.md
# Callbacks Callbacks are objects that can customize the behavior of the training loop in the PyTorch [`Trainer`] (this feature is not yet implemented in TensorFlow) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early stopping). Callbacks are "read only" pieces of code, apart from the [`TrainerControl`] object they return, they cannot change anything in the training loop. For customizations that require changes in the training loop, you should subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples). By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] will use the following callbacks. - [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation. - [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the logs (the first one is used if you deactivate tqdm through the [`TrainingArguments`], otherwise it's the second one). - [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4 or tensorboardX). - [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed. - [`~integrations.CometCallback`] if [comet_ml](https://www.comet.ml/site/) is installed. - [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed. - [`~integrations.NeptuneCallback`] if [neptune](https://neptune.ai/) is installed. - [`~integrations.AzureMLCallback`] if [azureml-sdk](https://pypi.org/project/azureml-sdk/) is installed. - [`~integrations.CodeCarbonCallback`] if [codecarbon](https://pypi.org/project/codecarbon/) is installed. - [`~integrations.ClearMLCallback`] if [clearml](https://github.com/allegroai/clearml) is installed. - [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed. - [`~integrations.FlyteCallback`] if [flyte](https://flyte.org/) is installed. - [`~integrations.DVCLiveCallback`] if [dvclive](https://dvc.org/doc/dvclive) is installed. If a package is installed but you don't wish to use the accompanying integration, you can change `TrainingArguments.report_to` to a list of just those integrations you want to use (e.g. `["azure_ml", "wandb"]`). The main class that implements callbacks is [`TrainerCallback`]. It gets the [`TrainingArguments`] used to instantiate the [`Trainer`], can access that Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via [`TrainerControl`]. 
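For example, a minimal sketch of wiring one of the built-in callbacks, [`EarlyStoppingCallback`], into a [`Trainer`] (the argument values and the `model`/dataset variables are illustrative assumptions):

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",                  # hypothetical output directory
    evaluation_strategy="steps",       # early stopping needs periodic evaluation
    eval_steps=500,
    load_best_model_at_end=True,       # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,                       # assumed to be defined
    args=args,
    train_dataset=train_dataset,       # assumed datasets
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```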
## Available Callbacks

Here is the list of the available [`TrainerCallback`] in the library:

[[autodoc]] integrations.CometCallback
    - setup

[[autodoc]] DefaultFlowCallback

[[autodoc]] PrinterCallback

[[autodoc]] ProgressCallback

[[autodoc]] EarlyStoppingCallback

[[autodoc]] integrations.TensorBoardCallback

[[autodoc]] integrations.WandbCallback
    - setup

[[autodoc]] integrations.MLflowCallback
    - setup

[[autodoc]] integrations.AzureMLCallback

[[autodoc]] integrations.CodeCarbonCallback

[[autodoc]] integrations.NeptuneCallback

[[autodoc]] integrations.ClearMLCallback

[[autodoc]] integrations.DagsHubCallback

[[autodoc]] integrations.FlyteCallback

[[autodoc]] integrations.DVCLiveCallback
    - setup

## TrainerCallback

[[autodoc]] TrainerCallback

Here is an example of how to register a custom callback with the PyTorch [`Trainer`]:

```python
from transformers import Trainer, TrainerCallback


class MyCallback(TrainerCallback):
    "A callback that prints a message at the beginning of training"

    def on_train_begin(self, args, state, control, **kwargs):
        print("Starting training")


trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[MyCallback],  # We can either pass the callback class this way or an instance of it (MyCallback())
)
```

Another way to register a callback is to call `trainer.add_callback()` as follows:

```python
trainer = Trainer(...)
trainer.add_callback(MyCallback)
# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
```

## TrainerState

[[autodoc]] TrainerState

## TrainerControl

[[autodoc]] TrainerControl
main_classes/quantization.md
# Quantize 🤗 Transformers models ## AWQ integration AWQ method has been introduced in the [*AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration* paper](https://arxiv.org/abs/2306.00978). With AWQ you can run models in 4-bit precision, while preserving its original quality (i.e. no performance degradation) with a superior throughput that other quantization methods presented below - reaching similar throughput as pure `float16` inference. We now support inference with any AWQ model, meaning anyone can load and use AWQ weights that are pushed on the Hub or saved locally. Note that using AWQ requires to have access to a NVIDIA GPU. CPU inference is not supported yet. ### Quantizing a model We advise users to look at different existing tools in the ecosystem to quantize their models with AWQ algorithm, such as: - [`llm-awq`](https://github.com/mit-han-lab/llm-awq) from MIT Han Lab - [`autoawq`](https://github.com/casper-hansen/AutoAWQ) from [`casper-hansen`](https://github.com/casper-hansen) - Intel neural compressor from Intel - through [`optimum-intel`](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc) Many other tools might exist in the ecosystem, please feel free to open a PR to add them to the list. Currently the integration with 🤗 Transformers is only available for models that have been quantized using `autoawq` library and `llm-awq`. Most of the models quantized with `auto-awq` can be found under [`TheBloke`](https://huggingface.co/TheBloke) namespace of 🤗 Hub, and to quantize models with `llm-awq` please refer to the [`convert_to_hf.py`](https://github.com/mit-han-lab/llm-awq/blob/main/examples/convert_to_hf.py) script in the examples folder of [`llm-awq`](https://github.com/mit-han-lab/llm-awq/). ### Load a quantized model You can load a quantized model from the Hub using the `from_pretrained` method. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model's configuration file (`configuration.json`). You can confirm that the model is quantized in the AWQ format by checking the field `quantization_config.quant_method` which should be set to `"awq"`. Note that loading the model will set other weights in `float16` by default for performance reasons. If you want to change that behavior, you can pass `torch_dtype` argument to `torch.float32` or `torch.bfloat16`. You can find in the sections below some example snippets and notebook. ## Example usage First, you need to install [`autoawq`](https://github.com/casper-hansen/AutoAWQ) library ```bash pip install autoawq thon from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0") In case you first load your model on CPU, make sure to move it to your GPU device before using thon from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda:0") ### Combining AWQ and Flash Attention You can combine AWQ quantization with Flash Attention to get a model that is both quantized and faster. Simply load the model using `from_pretrained` and pass `use_flash_attention_2=True` argument. 
thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", use_flash_attention_2=True, device_map="cuda:0") ### Benchmarks We performed some speed, throughput and latency benchmarks using [`optimum-benchmark`](https://github.com/huggingface/optimum-benchmark) library. Note at that time of writing this documentation section, the available quantization methods were: `awq`, `gptq` and `bitsandbytes`. The benchmark was run on a NVIDIA-A100 instance and the model used was [`TheBloke/Mistral-7B-v0.1-AWQ`](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ) for the AWQ model, [`TheBloke/Mistral-7B-v0.1-GPTQ`](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) for the GPTQ model. We also benchmarked it against `bitsandbytes` quantization methods and native `float16` model. Some results are shown below: You can find the full results together with packages versions in [this link](https://github.com/huggingface/optimum-benchmark/tree/main/examples/running-mistrals). From the results it appears that AWQ quantization method is the fastest quantization method for inference, text generation and among the lowest peak memory for text generation. However, AWQ seems to have the largest forward latency per batch size. ### Google colab demo Check out how to use this integration throughout this [Google Colab demo](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)! ### AwqConfig [[autodoc]] AwqConfig ## `AutoGPTQ` Integration 🤗 Transformers has integrated `optimum` API to perform GPTQ quantization on language models. You can load and quantize your model in 8, 4, 3 or even 2 bits without a big drop of performance and faster inference speed! This is supported by most GPU hardwares. To learn more about the quantization model, check out: - the [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) paper - the `optimum` [guide](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization) on GPTQ quantization - the [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) library used as the backend ### Requirements You need to have the following requirements installed to run the code below: - Install latest `AutoGPTQ` library `pip install auto-gptq` - Install latest `optimum` from source `pip install git+https://github.com/huggingface/optimum.git` - Install latest `transformers` from source `pip install git+https://github.com/huggingface/transformers.git` - Install latest `accelerate` library `pip install --upgrade accelerate` Note that GPTQ integration supports for now only text models and you may encounter unexpected behaviour for vision, speech or multi-modal models. ### Load and quantize a model GPTQ is a quantization method that requires weights calibration before using the quantized models. If you want to quantize transformers model from scratch, it might take some time before producing the quantized model (~5 min on a Google colab for `facebook/opt-350m` model). Hence, there are two different scenarios where you want to use GPTQ-quantized models. The first use case would be to load models that has been already quantized by other users that are available on the Hub, the second use case would be to quantize your model from scratch and save it or push it on the Hub so that other users can also use it. #### GPTQ Configuration In order to load and quantize a model, you need to create a [`GPTQConfig`]. 
You need to pass the number of `bits`, a `dataset` used to calibrate the quantization, and the model's `tokenizer` to prepare the dataset.

```python
from transformers import AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```

Note that you can pass your own dataset as a list of strings. However, it is highly recommended to use the dataset from the GPTQ paper.

```python
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
quantization = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```

#### Quantization

You can quantize a model by using `from_pretrained` and setting the `quantization_config`.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
```

Note that you will need a GPU to quantize a model. We will put the model on the CPU and move the modules back and forth to the GPU in order to quantize them. If you want to maximize your GPU usage while using CPU offload, you can set `device_map = "auto"`.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```

Note that disk offload is not supported. Furthermore, if you run out of memory because of the dataset, you may have to pass `max_memory` to `from_pretrained`. Check out this [guide](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) to learn more about `device_map` and `max_memory`.

GPTQ quantization only works for text models for now. Furthermore, the quantization process can take a lot of time depending on your hardware (a 175B model takes about 4 GPU-hours on an NVIDIA A100). Please check on the Hub whether a GPTQ-quantized version of the model already exists. If not, you can submit a request on GitHub.

### Push quantized model to 🤗 Hub

You can push the quantized model like any 🤗 model to the Hub with `push_to_hub`. The quantization config will be saved and pushed along with the model.

```python
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```

If you want to save your quantized model on your local machine, you can also do it with `save_pretrained`:

```python
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```

Note that if you have quantized your model with a `device_map`, make sure to move the entire model to one of your GPUs or to the CPU before saving it.

```python
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```

### Load a quantized model from the 🤗 Hub

You can load a quantized model from the Hub by using `from_pretrained`. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model configuration object.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
```

If you want to load a model faster and without allocating more memory than needed, the `device_map` argument also works with quantized models. Make sure that you have the `accelerate` library installed.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```

### Exllama kernels for faster inference

For 4-bit models, you can use the exllama kernels to get faster inference speed. They are activated by default.
You can change that behavior by passing `use_exllama` in [`GPTQConfig`]. This will overwrite the quantization config stored in the config. Note that you will only be able to overwrite the attributes related to the kernels. Furthermore, you need to have the entire model on gpus if you want to use exllama kernels. Also, you can perform CPU inference using Auto-GPTQ for Auto-GPTQ version > 0.4.2 by passing `device_map` = "cpu". For CPU inference, you have to pass `use_exllama = False` in the `GPTQConfig.` import torch gptq_config = GPTQConfig(bits=4) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config) With the release of the exllamav2 kernels, you can get faster inference speed compared to the exllama kernels. You just need to pass `exllama_config={"version": 2}` in [`GPTQConfig`]: import torch gptq_config = GPTQConfig(bits=4, exllama_config={"version":2}) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config = gptq_config) Note that only 4-bit models are supported for now. Furthermore, it is recommended to deactivate the exllama kernels if you are finetuning a quantized model with peft. You can find the benchmark of these kernels [here](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark) #### Fine-tune a quantized model With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been quantized with GPTQ. Please have a look at [`peft`](https://github.com/huggingface/peft) library for more details. ### Example demo Check out the Google Colab [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) to learn how to quantize your model with GPTQ and how finetune the quantized model with peft. ### GPTQConfig [[autodoc]] GPTQConfig ## `bitsandbytes` Integration 🤗 Transformers is closely integrated with most used modules on `bitsandbytes`. You can load your model in 8-bit precision with few lines of code. This is supported by most of the GPU hardwares since the `0.37.0` release of `bitsandbytes`. Learn more about the quantization method in the [LLM.int8()](https://arxiv.org/abs/2208.07339) paper, or the [blogpost](https://huggingface.co/blog/hf-bitsandbytes-integration) about the collaboration. Since its `0.39.0` release, you can load any model that supports `device_map` using 4-bit quantization, leveraging FP4 data type. If you want to quantize your own pytorch model, check out this [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) from 🤗 Accelerate library. Here are the things you can do using `bitsandbytes` integration ### General usage You can quantize a model by using the `load_in_8bit` or `load_in_4bit` argument when calling the [`~PreTrainedModel.from_pretrained`] method as long as your model supports loading with 🤗 Accelerate and contains `torch.nn.Linear` layers. This should work for any modality as well. thon from transformers import AutoModelForCausalLM model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True) model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True) By default all other modules (e.g. 
`torch.nn.LayerNorm`) will be converted in `torch.float16`, but if you want to change their `dtype` you can overwrite the `torch_dtype` argument: thon >>> import torch >>> from transformers import AutoModelForCausalLM >>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32) >>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype torch.float32 ### FP4 quantization #### Requirements Make sure that you have installed the requirements below before running any of the code snippets below. - Latest `bitsandbytes` library `pip install bitsandbytes>=0.39.0` - Install latest `accelerate` `pip install --upgrade accelerate` - Install latest `transformers` `pip install --upgrade transformers` #### Tips and best practices - **Advanced usage:** Refer to [this Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) for advanced usage of 4-bit quantization with all the possible options. - **Faster inference with `batch_size=1` :** Since the `0.40.0` release of bitsandbytes, for `batch_size=1` you can benefit from fast inference. Check out [these release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) and make sure to have a version that is greater than `0.40.0` to benefit from this feature out of the box. - **Training:** According to [QLoRA paper](https://arxiv.org/abs/2305.14314), for training 4-bit base models (e.g. using LoRA adapters) one should use `bnb_4bit_quant_type='nf4'`. - **Inference:** For inference, `bnb_4bit_quant_type` does not have a huge impact on the performance. However for consistency with the model's weights, make sure you use the same `bnb_4bit_compute_dtype` and `torch_dtype` arguments. #### Load a large model in 4bit By using `load_in_4bit=True` when calling the `.from_pretrained` method, you can divide your memory use by 4 (roughly). thon # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "bigscience/bloom-1b7" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True) Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights on the Hub. Note also that you cannot train 4-bit weights as this is not supported yet. However you can use 4-bit models to train extra parameters, this will be covered in the next section. ### Load a large model in 8bit You can load a model by roughly halving the memory requirements by using `load_in_8bit=True` argument when calling `.from_pretrained` method thon # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "bigscience/bloom-1b7" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) Then, use your model as you would usually use a [`PreTrainedModel`]. You can check the memory footprint of your model with `get_memory_footprint` method. thon print(model.get_memory_footprint()) With this integration we were able to load large models on smaller devices and run them without any issue. Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights on the Hub except if you use the latest `transformers` and `bitsandbytes`. Note also that you cannot train 8-bit weights as this is not supported yet. 
However you can use 8-bit models to train extra parameters, this will be covered in the next section. Note also that `device_map` is optional but setting `device_map = 'auto'` is prefered for inference as it will dispatch efficiently the model on the available ressources. #### Advanced use cases Here we will cover some advanced use cases you can perform with FP4 quantization ##### Change the compute dtype The compute dtype is used to change the dtype that will be used during computation. For example, hidden states could be in `float32` but computation can be set to bf16 for speedups. By default, the compute dtype is set to `float32`. thon import torch from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16) ##### Using NF4 (Normal Float 4) data type You can also use the NF4 data type, which is a new 4bit datatype adapted for weights that have been initialized using a normal distribution. For that run: thon from transformers import BitsAndBytesConfig nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", ) model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config) ##### Use nested quantization for more memory efficient inference We also advise users to use the nested quantization technique. This saves more memory at no additional performance - from our empirical observations, this enables fine-tuning llama-13b model on an NVIDIA-T4 16GB with a sequence length of 1024, batch size of 1 and gradient accumulation steps of 4. thon from transformers import BitsAndBytesConfig double_quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config) ### Push quantized models on the 🤗 Hub You can push a quantized model on the Hub by naively using `push_to_hub` method. This will first push the quantization configuration file, then push the quantized model weights. Make sure to use `bitsandbytes>0.37.2` (at this time of writing, we tested it on `bitsandbytes==0.38.0.post1`) to be able to use this feature. thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") model.push_to_hub("bloom-560m-8bit") Pushing 8bit models on the Hub is strongely encouraged for large models. This will allow the community to benefit from the memory footprint reduction and loading for example large models on a Google Colab. ### Load a quantized model from the 🤗 Hub You can load a quantized model from the Hub by using `from_pretrained` method. Make sure that the pushed weights are quantized, by checking that the attribute `quantization_config` is present in the model configuration object. thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto") Note that in this case, you don't need to specify the arguments `load_in_8bit=True`, but you need to make sure that `bitsandbytes` and `accelerate` are installed. Note also that `device_map` is optional but setting `device_map = 'auto'` is prefered for inference as it will dispatch efficiently the model on the available ressources. 
### Advanced use cases This section is intended to advanced users, that want to explore what it is possible to do beyond loading and running 8-bit models. #### Offload between `cpu` and `gpu` One of the advanced use case of this is being able to load a model and dispatch the weights between `CPU` and `GPU`. Note that the weights that will be dispatched on CPU **will not** be converted in 8-bit, thus kept in `float32`. This feature is intended for users that want to fit a very large model and dispatch the model between GPU and CPU. First, load a [`BitsAndBytesConfig`] from `transformers` and set the attribute `llm_int8_enable_fp32_cpu_offload` to `True`: thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True) Let's say you want to load `bigscience/bloom-1b7` model, and you have just enough GPU RAM to fit the entire model except the `lm_head`. Therefore write a custom device_map as follows: thon device_map = { "transformer.word_embeddings": 0, "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h": 0, "transformer.ln_f": 0, } And load your model as follows: thon model_8bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", device_map=device_map, quantization_config=quantization_config, ) And that's it! Enjoy your model! #### Play with `llm_int8_threshold` You can play with the `llm_int8_threshold` argument to change the threshold of the outliers. An "outlier" is a hidden state value that is greater than a certain threshold. This corresponds to the outlier threshold for outlier detection as described in `LLM.int8()` paper. Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning). This argument can impact the inference speed of the model. We suggest to play with this parameter to find which one is the best for your use case. thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = "bigscience/bloom-1b7" quantization_config = BitsAndBytesConfig( llm_int8_threshold=10, ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, device_map=device_map, quantization_config=quantization_config, ) tokenizer = AutoTokenizer.from_pretrained(model_id) #### Skip the conversion of some modules Some models has several modules that needs to be not converted in 8-bit to ensure stability. For example Jukebox model has several `lm_head` modules that should be skipped. 
Play with `llm_int8_skip_modules` thon from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = "bigscience/bloom-1b7" quantization_config = BitsAndBytesConfig( llm_int8_skip_modules=["lm_head"], ) model_8bit = AutoModelForCausalLM.from_pretrained( model_id, device_map=device_map, quantization_config=quantization_config, ) tokenizer = AutoTokenizer.from_pretrained(model_id) #### Fine-tune a model that has been loaded in 8-bit With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded in 8-bit. This enables fine-tuning large models such as `flan-t5-large` or `facebook/opt-6.7b` in a single google Colab. Please have a look at [`peft`](https://github.com/huggingface/peft) library for more details. Note that you don't need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. You can also set the device map to a specific device if needed (e.g. `cuda:0`, `0`, `torch.device('cuda:0')`). Please note that `device_map=auto` should be used for inference only. ### BitsAndBytesConfig [[autodoc]] BitsAndBytesConfig ## Quantization with 🤗 `optimum` Please have a look at [Optimum documentation](https://huggingface.co/docs/optimum/index) to learn more about quantization methods that are supported by `optimum` and see if these are applicable for your use case.
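Finally, as a recap of the 4-bit `bitsandbytes` options from the earlier sections, here is a sketch that combines NF4 weights, nested quantization, and a bfloat16 compute dtype in a single configuration (the checkpoint is an illustrative assumption):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit weights
    bnb_4bit_use_double_quant=True,         # nested quantization for extra memory savings
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype used for the matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",                    # assumed checkpoint for illustration
    device_map="auto",
    quantization_config=bnb_config,
)
```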
main_classes/pipelines.md
# Pipelines The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](../task_summary) for examples of use. There are two categories of pipeline abstractions to be aware about: - The [`pipeline`] which is the most powerful object encapsulating all other pipelines. - Task-specific pipelines are available for [audio](#audio), [computer vision](#computer-vision), [natural language processing](#natural-language-processing), and [multimodal](#multimodal) tasks. ## The pipeline abstraction The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but can provide additional quality of life. Simple call on one item: thon >>> pipe = pipeline("text-classification") >>> pipe("This restaurant is awesome") [{'label': 'POSITIVE', 'score': 0.9998743534088135}] If you want to use a specific model from the [hub](https://huggingface.co) you can ignore the task if the model on the hub already defines it: thon >>> pipe = pipeline(model="roberta-large-mnli") >>> pipe("This restaurant is awesome") [{'label': 'NEUTRAL', 'score': 0.7313136458396912}] To call a pipeline on many items, you can call it with a *list*. thon >>> pipe = pipeline("text-classification") >>> pipe(["This restaurant is awesome", "This restaurant is awful"]) [{'label': 'POSITIVE', 'score': 0.9998743534088135}, {'label': 'NEGATIVE', 'score': 0.9996669292449951}] To iterate over full datasets it is recommended to use a `dataset` directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn't don't hesitate to create an issue. thon import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset for out in tqdm(pipe(KeyDataset(dataset, "file"))): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": .} # . For ease of use, a generator is also possible: thon from transformers import pipeline pipe = pipeline("text-classification") def data(): while True: # This could come from a dataset, a database, a queue or HTTP request # in a server # Caveat: because this is iterative, you cannot use `num_workers > 1` variable # to use multiple threads to preprocess data. You can still have 1 thread that # does the preprocessing while the main runs the big inference yield "This is a test" for out in pipe(data()): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": .} # . [[autodoc]] pipeline ## Pipeline batching All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing lists or `Dataset` or `generator`). 
thon from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset import datasets dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised") pipe = pipeline("text-classification", device=0) for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"): print(out) # [{'label': 'POSITIVE', 'score': 0.9998743534088135}] # Exactly the same output as before, but the content are passed # as batches to the model However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending on hardware, data and the actual model being used. Example where it's mostly a speedup: thon from transformers import pipeline from torch.utils.data import Dataset from tqdm.auto import tqdm pipe = pipeline("text-classification", device=0) class MyDataset(Dataset): def __len__(self): return 5000 def __getitem__(self, i): return "This is a test" dataset = MyDataset() for batch_size in [1, 8, 64, 256]: print("-" * 30) print(f"Streaming batch_size={batch_size}") for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)): pass # On GTX 970 ------------------------------ Streaming no batching 100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s] ------------------------------ Streaming batch_size=8 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s] ------------------------------ Streaming batch_size=64 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s] ------------------------------ Streaming batch_size=256 100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s] (diminishing returns, saturated the GPU) Example where it's most a slowdown: thon class MyDataset(Dataset): def __len__(self): return 5000 def __getitem__(self, i): if i % 64 == 0: n = 100 else: n = 1 return "This is a test" * n This is a occasional very long sentence compared to the other. In that case, the **whole** batch will need to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on bigger batches, the program simply crashes. ------------------------------ Streaming no batching 100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s] ------------------------------ Streaming batch_size=8 100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s] ------------------------------ Streaming batch_size=64 100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s] ------------------------------ Streaming batch_size=256 0%| | 0/1000 [00:00, ?it/s] Traceback (most recent call last): File "/home/nicolas/src/transformers/test.py", line 42, in <module for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)): . q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head) RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch) There are no good (general) solutions for this problem, and your mileage may vary depending on your use cases. Rule of thumb: For users, a rule of thumb is: - **Measure performance on your load, with your hardware. 
Measure, measure, and keep measuring. Real numbers are the only way to go.** - If you are latency constrained (live product doing inference), don't batch. - If you are using CPU, don't batch. - If you are using throughput (you want to run your model on a bunch of static data), on GPU, then: - If you have no clue about the size of the sequence_length ("natural" data), by default don't batch, measure and try tentatively to add it, add OOM checks to recover when it will fail (and it will at some point if you don't control the sequence_length.) - If your sequence_length is super regular, then batching is more likely to be VERY interesting, measure and push it until you get OOMs. - The larger the GPU the more likely batching is going to be more interesting - As soon as you enable batching, make sure you can handle OOMs nicely. ## Pipeline chunk batching `zero-shot-classification` and `question-answering` are slightly specific in the sense, that a single input might yield multiple forward pass of a model. Under normal circumstances, this would yield issues with `batch_size` argument. In order to circumvent this issue, both of these pipelines are a bit specific, they are `ChunkPipeline` instead of regular `Pipeline`. In short: thon preprocessed = pipe.preprocess(inputs) model_outputs = pipe.forward(preprocessed) outputs = pipe.postprocess(model_outputs) Now becomes: thon all_model_outputs = [] for preprocessed in pipe.preprocess(inputs): model_outputs = pipe.forward(preprocessed) all_model_outputs.append(model_outputs) outputs = pipe.postprocess(all_model_outputs) This should be very transparent to your code because the pipelines are used in the same way. This is a simplified view, since the pipeline can handle automatically the batch to ! Meaning you don't have to care about how many forward passes you inputs are actually going to trigger, you can optimize the `batch_size` independently of the inputs. The caveats from the previous section still apply. ## Pipeline custom code If you want to override a specific pipeline. Don't hesitate to create an issue for your task at hand, the goal of the pipeline is to be easy to use and support most cases, so `transformers` could maybe support your use case. If you want to try simply you can: - Subclass your pipeline of choice thon class MyPipeline(TextClassificationPipeline): def postprocess(): # Your code goes here scores = scores * 100 # And here my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ) # or if you use *pipeline* function, then: my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline) That should enable you to do all the custom code you want. ## Implementing a pipeline [Implementing a new pipeline](../add_new_pipeline) ## Audio Pipelines available for audio tasks include the following. ### AudioClassificationPipeline [[autodoc]] AudioClassificationPipeline - __call__ - all ### AutomaticSpeechRecognitionPipeline [[autodoc]] AutomaticSpeechRecognitionPipeline - __call__ - all ### TextToAudioPipeline [[autodoc]] TextToAudioPipeline - __call__ - all ### ZeroShotAudioClassificationPipeline [[autodoc]] ZeroShotAudioClassificationPipeline - __call__ - all ## Computer vision Pipelines available for computer vision tasks include the following. 
### DepthEstimationPipeline [[autodoc]] DepthEstimationPipeline - __call__ - all ### ImageClassificationPipeline [[autodoc]] ImageClassificationPipeline - __call__ - all ### ImageSegmentationPipeline [[autodoc]] ImageSegmentationPipeline - __call__ - all ### ImageToImagePipeline [[autodoc]] ImageToImagePipeline - __call__ - all ### ObjectDetectionPipeline [[autodoc]] ObjectDetectionPipeline - __call__ - all ### VideoClassificationPipeline [[autodoc]] VideoClassificationPipeline - __call__ - all ### ZeroShotImageClassificationPipeline [[autodoc]] ZeroShotImageClassificationPipeline - __call__ - all ### ZeroShotObjectDetectionPipeline [[autodoc]] ZeroShotObjectDetectionPipeline - __call__ - all ## Natural Language Processing Pipelines available for natural language processing tasks include the following. ### ConversationalPipeline [[autodoc]] Conversation [[autodoc]] ConversationalPipeline - __call__ - all ### FillMaskPipeline [[autodoc]] FillMaskPipeline - __call__ - all ### NerPipeline [[autodoc]] NerPipeline See [`TokenClassificationPipeline`] for all details. ### QuestionAnsweringPipeline [[autodoc]] QuestionAnsweringPipeline - __call__ - all ### SummarizationPipeline [[autodoc]] SummarizationPipeline - __call__ - all ### TableQuestionAnsweringPipeline [[autodoc]] TableQuestionAnsweringPipeline - __call__ ### TextClassificationPipeline [[autodoc]] TextClassificationPipeline - __call__ - all ### TextGenerationPipeline [[autodoc]] TextGenerationPipeline - __call__ - all ### Text2TextGenerationPipeline [[autodoc]] Text2TextGenerationPipeline - __call__ - all ### TokenClassificationPipeline [[autodoc]] TokenClassificationPipeline - __call__ - all ### TranslationPipeline [[autodoc]] TranslationPipeline - __call__ - all ### ZeroShotClassificationPipeline [[autodoc]] ZeroShotClassificationPipeline - __call__ - all ## Multimodal Pipelines available for multimodal tasks include the following. ### DocumentQuestionAnsweringPipeline [[autodoc]] DocumentQuestionAnsweringPipeline - __call__ - all ### FeatureExtractionPipeline [[autodoc]] FeatureExtractionPipeline - __call__ - all ### ImageToTextPipeline [[autodoc]] ImageToTextPipeline - __call__ - all ### MaskGenerationPipeline [[autodoc]] MaskGenerationPipeline - __call__ - all ### VisualQuestionAnsweringPipeline [[autodoc]] VisualQuestionAnsweringPipeline - __call__ - all ## Parent class: `Pipeline` [[autodoc]] Pipeline
main_classes/logging.md
# Logging

🤗 Transformers has a centralized logging system, so that you can set up the verbosity of the library easily.

Currently the default verbosity of the library is `WARNING`.

To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity to the INFO level.

```python
import transformers

transformers.logging.set_verbosity_info()
```

You can also use the environment variable `TRANSFORMERS_VERBOSITY` to override the default verbosity. You can set it to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:

```bash
TRANSFORMERS_VERBOSITY=error ./myprogram.py
```

Additionally, some `warnings` can be disabled by setting the environment variable `TRANSFORMERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using [`logger.warning_advice`]. For example:

```bash
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```

Here is an example of how to use the same logger as the library in your own module or script:

```python
from transformers.utils import logging

logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
```

All the methods of this logging module are documented below; the main ones are [`logging.get_verbosity`] to get the current level of verbosity in the logger and [`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are:

- `transformers.logging.CRITICAL` or `transformers.logging.FATAL` (int value, 50): only report the most critical errors.
- `transformers.logging.ERROR` (int value, 40): only report errors.
- `transformers.logging.WARNING` or `transformers.logging.WARN` (int value, 30): only report errors and warnings. This is the default level used by the library.
- `transformers.logging.INFO` (int value, 20): report errors, warnings, and basic information.
- `transformers.logging.DEBUG` (int value, 10): report all information.

By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.

## `logging` vs `warnings`

Python has two logging systems that are often used in conjunction: `logging`, which is explained above, and `warnings`, which allows further classification of warnings in specific buckets, e.g., `FutureWarning` for a feature or path that has already been deprecated and `DeprecationWarning` to indicate an upcoming deprecation.

We use both in the `transformers` library. We leverage and adapt `logging`'s `captureWarnings` method to allow management of these warning messages by the verbosity setters above.

What does that mean for developers of the library? We should respect the following heuristic:

- `warnings` should be favored for developers of the library and libraries dependent on `transformers`
- `logging` should be used for end-users of the library using it in every-day projects

See reference of the `captureWarnings` method below.
[[autodoc]] logging.captureWarnings ## Base setters [[autodoc]] logging.set_verbosity_error [[autodoc]] logging.set_verbosity_warning [[autodoc]] logging.set_verbosity_info [[autodoc]] logging.set_verbosity_debug ## Other functions [[autodoc]] logging.get_verbosity [[autodoc]] logging.set_verbosity [[autodoc]] logging.get_logger [[autodoc]] logging.enable_default_handler [[autodoc]] logging.disable_default_handler [[autodoc]] logging.enable_explicit_format [[autodoc]] logging.reset_format [[autodoc]] logging.enable_progress_bar [[autodoc]] logging.disable_progress_bar
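For instance, a short sketch of quieting the library down in a script, using the functions documented above:

```python
from transformers.utils import logging

logging.set_verbosity_error()   # only report errors
logging.disable_progress_bar()  # hide tqdm bars shown during downloads
```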
main_classes/agent.md
# Agents & Tools Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the [introductory guide](../transformers_agents). This page contains the API docs for the underlying classes. ## Agents We provide three types of agents: [`HfAgent`] uses inference endpoints for opensource models, [`LocalAgent`] uses a model of your choice locally and [`OpenAiAgent`] uses OpenAI closed models. ### HfAgent [[autodoc]] HfAgent ### LocalAgent [[autodoc]] LocalAgent ### OpenAiAgent [[autodoc]] OpenAiAgent ### AzureOpenAiAgent [[autodoc]] AzureOpenAiAgent ### Agent [[autodoc]] Agent - chat - run - prepare_for_new_chat ## Tools ### load_tool [[autodoc]] load_tool ### Tool [[autodoc]] Tool ### PipelineTool [[autodoc]] PipelineTool ### RemoteTool [[autodoc]] RemoteTool ### launch_gradio_demo [[autodoc]] launch_gradio_demo ## Agent Types Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, ), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`. These types have three specific purposes: - Calling `to_raw` on the type should return the underlying object - Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText` but will be the path of the serialized version of the object in other instances - Displaying it in an ipython kernel should display the object correctly ### AgentText [[autodoc]] transformers.tools.agent_types.AgentText ### AgentImage [[autodoc]] transformers.tools.agent_types.AgentImage ### AgentAudio [[autodoc]] transformers.tools.agent_types.AgentAudio
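To give a feel for the API, here is a minimal sketch of instantiating an agent and running prompts. The inference endpoint URL and the prompts are illustrative; see the introductory guide for supported setups:

```python
from transformers import HfAgent

# Assumed public inference endpoint, used here purely for illustration.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# `run` lets the agent pick tools, generate code, and execute it in one shot.
picture = agent.run("Draw me a picture of rivers and lakes.")

# `chat` keeps state across turns, so follow-up instructions can refer to earlier results.
agent.chat("Transform the picture so that there is a rock in there.")
```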
main_classes/output.md
# Model outputs

All models have outputs that are instances of subclasses of [`~utils.ModelOutput`]. Those are data structures containing all the information returned by the model, but that can also be used as tuples or dictionaries.

Let's see how this looks in an example:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)
```

The `outputs` object is a [`~modeling_outputs.SequenceClassifierOutput`]. As we can see in the documentation of that class below, it has an optional `loss`, a `logits`, an optional `hidden_states` and an optional `attentions` attribute. Here we have the `loss` since we passed along `labels`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

When passing `output_hidden_states=True` you may expect `outputs.hidden_states[-1]` to match `outputs.last_hidden_state` exactly. However, this is not always the case. Some models apply normalization or subsequent processing to the last hidden state when it's returned.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here for instance `outputs.loss` is the loss computed by the model, and `outputs.attentions` is `None`.

When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values. Here for instance, it has two elements, `loss` then `logits`, so

```python
outputs[:2]
```

will return the tuple `(outputs.loss, outputs.logits)`.

When considering our `outputs` object as a dictionary, it only considers the attributes that don't have `None` values. Here for instance, it has two keys, `loss` and `logits`.

We document here the generic model outputs that are used by more than one model type. Specific output types are documented on their corresponding model page.
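Continuing the example above, a short sketch of the three access patterns (attribute, dictionary key, and tuple index) on the same `outputs` object:

```python
# Attribute access; attributes that were not returned are None.
loss = outputs.loss
assert outputs.attentions is None

# Dictionary-style access (only non-None entries are exposed as keys).
logits = outputs["logits"]

# Tuple-style access (only non-None entries are included, in order).
loss, logits = outputs[:2]
```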
## ModelOutput

[[autodoc]] utils.ModelOutput
    - to_tuple

## BaseModelOutput

[[autodoc]] modeling_outputs.BaseModelOutput

## BaseModelOutputWithPooling

[[autodoc]] modeling_outputs.BaseModelOutputWithPooling

## BaseModelOutputWithCrossAttentions

[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions

## BaseModelOutputWithPoolingAndCrossAttentions

[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions

## BaseModelOutputWithPast

[[autodoc]] modeling_outputs.BaseModelOutputWithPast

## BaseModelOutputWithPastAndCrossAttentions

[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions

## Seq2SeqModelOutput

[[autodoc]] modeling_outputs.Seq2SeqModelOutput

## CausalLMOutput

[[autodoc]] modeling_outputs.CausalLMOutput

## CausalLMOutputWithCrossAttentions

[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions

## CausalLMOutputWithPast

[[autodoc]] modeling_outputs.CausalLMOutputWithPast

## MaskedLMOutput

[[autodoc]] modeling_outputs.MaskedLMOutput

## Seq2SeqLMOutput

[[autodoc]] modeling_outputs.Seq2SeqLMOutput

## NextSentencePredictorOutput

[[autodoc]] modeling_outputs.NextSentencePredictorOutput

## SequenceClassifierOutput

[[autodoc]] modeling_outputs.SequenceClassifierOutput

## Seq2SeqSequenceClassifierOutput

[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput

## MultipleChoiceModelOutput

[[autodoc]] modeling_outputs.MultipleChoiceModelOutput

## TokenClassifierOutput

[[autodoc]] modeling_outputs.TokenClassifierOutput

## QuestionAnsweringModelOutput

[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput

## Seq2SeqQuestionAnsweringModelOutput

[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput

## Seq2SeqSpectrogramOutput

[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput

## SemanticSegmenterOutput

[[autodoc]] modeling_outputs.SemanticSegmenterOutput

## ImageClassifierOutput

[[autodoc]] modeling_outputs.ImageClassifierOutput

## ImageClassifierOutputWithNoAttention

[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention

## DepthEstimatorOutput

[[autodoc]] modeling_outputs.DepthEstimatorOutput

## Wav2Vec2BaseModelOutput

[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput

## XVectorOutput

[[autodoc]] modeling_outputs.XVectorOutput

## Seq2SeqTSModelOutput

[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput

## Seq2SeqTSPredictionOutput

[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput

## SampleTSPredictionOutput

[[autodoc]] modeling_outputs.SampleTSPredictionOutput

## TFBaseModelOutput

[[autodoc]] modeling_tf_outputs.TFBaseModelOutput

## TFBaseModelOutputWithPooling

[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling

## TFBaseModelOutputWithPoolingAndCrossAttentions

[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions

## TFBaseModelOutputWithPast

[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast

## TFBaseModelOutputWithPastAndCrossAttentions

[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions

## TFSeq2SeqModelOutput

[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput

## TFCausalLMOutput

[[autodoc]] modeling_tf_outputs.TFCausalLMOutput

## TFCausalLMOutputWithCrossAttentions

[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions

## TFCausalLMOutputWithPast

[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast

## TFMaskedLMOutput

[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput

## TFSeq2SeqLMOutput

[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput

## TFNextSentencePredictorOutput

[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput

## TFSequenceClassifierOutput

[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput

## TFSeq2SeqSequenceClassifierOutput

[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput

## TFMultipleChoiceModelOutput

[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput

## TFTokenClassifierOutput

[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput

## TFQuestionAnsweringModelOutput

[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput

## TFSeq2SeqQuestionAnsweringModelOutput

[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput

## FlaxBaseModelOutput

[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput

## FlaxBaseModelOutputWithPast

[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast

## FlaxBaseModelOutputWithPooling

[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling

## FlaxBaseModelOutputWithPastAndCrossAttentions

[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions

## FlaxSeq2SeqModelOutput

[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput

## FlaxCausalLMOutputWithCrossAttentions

[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions

## FlaxMaskedLMOutput

[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput

## FlaxSeq2SeqLMOutput

[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput

## FlaxNextSentencePredictorOutput

[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput

## FlaxSequenceClassifierOutput

[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput

## FlaxSeq2SeqSequenceClassifierOutput

[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput

## FlaxMultipleChoiceModelOutput

[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput

## FlaxTokenClassifierOutput

[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput

## FlaxQuestionAnsweringModelOutput

[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput

## FlaxSeq2SeqQuestionAnsweringModelOutput

[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
main_classes/keras_callbacks.md
# Keras callbacks

When training a Transformers model with Keras, there are some library-specific callbacks available to automate common tasks:

## KerasMetricCallback

[[autodoc]] KerasMetricCallback

## PushToHubCallback

[[autodoc]] PushToHubCallback
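As a rough sketch of how these callbacks are typically wired into `model.fit`, the snippet below assumes you already have a compiled Keras `model`, a `tokenizer`, and `tf.data` datasets named `tf_train_dataset` and `tf_eval_dataset`; the `metric_fn` shown is a hypothetical accuracy function and the output directory is a placeholder.

```python
import numpy as np
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

# Hypothetical metric function: receives the predictions and labels gathered
# over `eval_dataset` as a tuple and returns a dict of named metric values.
def metric_fn(eval_predictions):
    predictions, labels = eval_predictions
    return {"accuracy": float(np.mean(np.argmax(predictions, axis=-1) == labels))}

metric_callback = KerasMetricCallback(metric_fn=metric_fn, eval_dataset=tf_eval_dataset)

# Periodically saves the model and tokenizer and pushes them to the Hub.
push_callback = PushToHubCallback(output_dir="./model_checkpoints", tokenizer=tokenizer)

model.fit(tf_train_dataset, epochs=3, callbacks=[metric_callback, push_callback])
```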
main_classes/image_processor.md
# Image Processor

An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.

## ImageProcessingMixin

[[autodoc]] image_processing_utils.ImageProcessingMixin
    - from_pretrained
    - save_pretrained

## BatchFeature

[[autodoc]] BatchFeature

## BaseImageProcessor

[[autodoc]] image_processing_utils.BaseImageProcessor
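As a minimal sketch of the preprocessing step described above, the snippet below loads an image processor from a pretrained checkpoint and prepares a single image; the checkpoint name and image path are placeholders.

```python
from PIL import Image
from transformers import AutoImageProcessor

# The checkpoint name and image path below are placeholders.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
image = Image.open("cat.png")

# Resizes and normalizes the image and returns a BatchFeature of PyTorch tensors.
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. (1, 3, 224, 224) for this checkpoint
```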